GHSA-59g4-mvcc-7p3x
Vulnerability from GitHub
Published: 2025-04-16 15:34
Modified: 2025-04-16 15:34
Details
In the Linux kernel, the following vulnerability has been resolved:
net: ibmveth: make veth_pool_store stop hanging
v2:
- Created a single error handling unlock and exit in veth_pool_store
- Greatly expanded commit message with previous explanatory-only text
Summary: Use rtnl_mutex to synchronize veth_pool_store with itself, ibmveth_close and ibmveth_open, preventing multiple calls in a row to napi_disable.
Background: Two (or more) threads could call veth_pool_store through writing to /sys/devices/vio/30000002/pool*/*. You can do this easily with a little shell script. This causes a hang.
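For concreteness, a racing writer pair along those lines can be sketched in C. The original report used a shell script, so the program below is illustrative only; the /sys/devices/vio/30000002 paths come from the device in the log and will differ on other systems.

/* Illustrative reproducer: two processes concurrently rewriting the
 * pool sysfs attributes, mirroring the shell-script test described
 * above. Paths are taken from the log and are system-specific. */
#include <fcntl.h>
#include <string.h>
#include <sys/wait.h>
#include <unistd.h>

static void hammer(const char *path, const char *val)
{
        /* Repeatedly open and write the attribute; each write can
         * trigger the driver's close/open sequence. */
        for (int i = 0; i < 1000; i++) {
                int fd = open(path, O_WRONLY);
                if (fd < 0)
                        continue;
                (void)write(fd, val, strlen(val));
                close(fd);
        }
}

int main(void)
{
        if (fork() == 0) {
                hammer("/sys/devices/vio/30000002/pool0/active", "0");
                _exit(0);
        }
        if (fork() == 0) {
                hammer("/sys/devices/vio/30000002/pool1/active", "1");
                _exit(0);
        }
        while (wait(NULL) > 0)
                ;
        return 0;
}

The interleaved "close starting" messages from T4365 and T4366 in the trace below are exactly this pattern: two writers driving the driver's close/open paths concurrently.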
I configured LOCKDEP, compiled ibmveth.c with DEBUG, and built a new kernel. I ran this test again and saw:
Setting pool0/active to 0
Setting pool1/active to 1
[ 73.911067][ T4365] ibmveth 30000002 eth0: close starting
Setting pool1/active to 1
Setting pool1/active to 0
[ 73.911367][ T4366] ibmveth 30000002 eth0: close starting
[ 73.916056][ T4365] ibmveth 30000002 eth0: close complete
[ 73.916064][ T4365] ibmveth 30000002 eth0: open starting
[ 110.808564][ T712] systemd-journald[712]: Sent WATCHDOG=1 notification.
[ 230.808495][ T712] systemd-journald[712]: Sent WATCHDOG=1 notification.
[ 243.683786][ T123] INFO: task stress.sh:4365 blocked for more than 122 seconds.
[ 243.683827][ T123] Not tainted 6.14.0-01103-g2df0c02dab82-dirty #8
[ 243.683833][ T123] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
[ 243.683838][ T123] task:stress.sh state:D stack:28096 pid:4365 tgid:4365 ppid:4364 task_flags:0x400040 flags:0x00042000
[ 243.683852][ T123] Call Trace:
[ 243.683857][ T123] [c00000000c38f690] [0000000000000001] 0x1 (unreliable)
[ 243.683868][ T123] [c00000000c38f840] [c00000000001f908] __switch_to+0x318/0x4e0
[ 243.683878][ T123] [c00000000c38f8a0] [c000000001549a70] __schedule+0x500/0x12a0
[ 243.683888][ T123] [c00000000c38f9a0] [c00000000154a878] schedule+0x68/0x210
[ 243.683896][ T123] [c00000000c38f9d0] [c00000000154ac80] schedule_preempt_disabled+0x30/0x50
[ 243.683904][ T123] [c00000000c38fa00] [c00000000154dbb0] __mutex_lock+0x730/0x10f0
[ 243.683913][ T123] [c00000000c38fb10] [c000000001154d40] napi_enable+0x30/0x60
[ 243.683921][ T123] [c00000000c38fb40] [c000000000f4ae94] ibmveth_open+0x68/0x5dc
[ 243.683928][ T123] [c00000000c38fbe0] [c000000000f4aa20] veth_pool_store+0x220/0x270
[ 243.683936][ T123] [c00000000c38fc70] [c000000000826278] sysfs_kf_write+0x68/0xb0
[ 243.683944][ T123] [c00000000c38fcb0] [c0000000008240b8] kernfs_fop_write_iter+0x198/0x2d0
[ 243.683951][ T123] [c00000000c38fd00] [c00000000071b9ac] vfs_write+0x34c/0x650
[ 243.683958][ T123] [c00000000c38fdc0] [c00000000071bea8] ksys_write+0x88/0x150
[ 243.683966][ T123] [c00000000c38fe10] [c0000000000317f4] system_call_exception+0x124/0x340
[ 243.683973][ T123] [c00000000c38fe50] [c00000000000d05c] system_call_vectored_common+0x15c/0x2ec
...
[ 243.684087][ T123] Showing all locks held in the system:
[ 243.684095][ T123] 1 lock held by khungtaskd/123:
[ 243.684099][ T123] #0: c00000000278e370 (rcu_read_lock){....}-{1:2}, at: debug_show_all_locks+0x50/0x248
[ 243.684114][ T123] 4 locks held by stress.sh/4365:
[ 243.684119][ T123] #0: c00000003a4cd3f8 (sb_writers#3){.+.+}-{0:0}, at: ksys_write+0x88/0x150
[ 243.684132][ T123] #1: c000000041aea888 (&of->mutex#2){+.+.}-{3:3}, at: kernfs_fop_write_iter+0x154/0x2d0
[ 243.684143][ T123] #2: c0000000366fb9a8 (kn->active#64){.+.+}-{0:0}, at: kernfs_fop_write_iter+0x160/0x2d0
[ 243.684155][ T123] #3: c000000035ff4cb8 (&dev->lock){+.+.}-{3:3}, at: napi_enable+0x30/0x60
[ 243.684166][ T123] 5 locks held by stress.sh/4366:
[ 243.684170][ T123] #0: c00000003a4cd3f8 (sb_writers#3){.+.+}-{0:0}, at: ksys_write+0x88/0x150
[ 243.
---truncated---
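As the summary above describes, the fix serializes veth_pool_store against itself and against ibmveth_open/ibmveth_close by holding rtnl_mutex across the store, with a single unlock-and-exit error path (the v2 note). A minimal sketch of that locking pattern follows; apply_pool_setting() is an illustrative placeholder, not the exact driver code.

/* Sketch of the locking pattern described in the summary. Only
 * rtnl_lock()/rtnl_unlock() and the single error exit reflect the fix;
 * apply_pool_setting() is a hypothetical helper for illustration. */
#include <linux/kernel.h>
#include <linux/rtnetlink.h>
#include <linux/sysfs.h>

static ssize_t veth_pool_store(struct kobject *kobj, struct attribute *attr,
                               const char *buf, size_t count)
{
        long value;
        int rc;

        rc = kstrtol(buf, 10, &value);
        if (rc)
                return rc;

        /* Serialize with ibmveth_open()/ibmveth_close() and with other
         * writers, so napi_disable() cannot be entered twice in a row. */
        rtnl_lock();

        rc = apply_pool_setting(attr, value); /* illustrative helper */
        if (rc)
                goto unlock_err; /* single error handling unlock and exit */

        rtnl_unlock();
        return count;

unlock_err:
        rtnl_unlock();
        return rc;
}

The exact change is in the commits linked under References below.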
{ "affected": [], "aliases": [ "CVE-2025-22053" ], "database_specific": { "cwe_ids": [], "github_reviewed": false, "github_reviewed_at": null, "nvd_published_at": "2025-04-16T15:15:58Z", "severity": null }, "details": "In the Linux kernel, the following vulnerability has been resolved:\n\nnet: ibmveth: make veth_pool_store stop hanging\n\nv2:\n- Created a single error handling unlock and exit in veth_pool_store\n- Greatly expanded commit message with previous explanatory-only text\n\nSummary: Use rtnl_mutex to synchronize veth_pool_store with itself,\nibmveth_close and ibmveth_open, preventing multiple calls in a row to\nnapi_disable.\n\nBackground: Two (or more) threads could call veth_pool_store through\nwriting to /sys/devices/vio/30000002/pool*/*. You can do this easily\nwith a little shell script. This causes a hang.\n\nI configured LOCKDEP, compiled ibmveth.c with DEBUG, and built a new\nkernel. I ran this test again and saw:\n\n Setting pool0/active to 0\n Setting pool1/active to 1\n [ 73.911067][ T4365] ibmveth 30000002 eth0: close starting\n Setting pool1/active to 1\n Setting pool1/active to 0\n [ 73.911367][ T4366] ibmveth 30000002 eth0: close starting\n [ 73.916056][ T4365] ibmveth 30000002 eth0: close complete\n [ 73.916064][ T4365] ibmveth 30000002 eth0: open starting\n [ 110.808564][ T712] systemd-journald[712]: Sent WATCHDOG=1 notification.\n [ 230.808495][ T712] systemd-journald[712]: Sent WATCHDOG=1 notification.\n [ 243.683786][ T123] INFO: task stress.sh:4365 blocked for more than 122 seconds.\n [ 243.683827][ T123] Not tainted 6.14.0-01103-g2df0c02dab82-dirty #8\n [ 243.683833][ T123] \"echo 0 \u003e /proc/sys/kernel/hung_task_timeout_secs\" disables this message.\n [ 243.683838][ T123] task:stress.sh state:D stack:28096 pid:4365 tgid:4365 ppid:4364 task_flags:0x400040 flags:0x00042000\n [ 243.683852][ T123] Call Trace:\n [ 243.683857][ T123] [c00000000c38f690] [0000000000000001] 0x1 (unreliable)\n [ 243.683868][ T123] [c00000000c38f840] [c00000000001f908] __switch_to+0x318/0x4e0\n [ 243.683878][ T123] [c00000000c38f8a0] [c000000001549a70] __schedule+0x500/0x12a0\n [ 243.683888][ T123] [c00000000c38f9a0] [c00000000154a878] schedule+0x68/0x210\n [ 243.683896][ T123] [c00000000c38f9d0] [c00000000154ac80] schedule_preempt_disabled+0x30/0x50\n [ 243.683904][ T123] [c00000000c38fa00] [c00000000154dbb0] __mutex_lock+0x730/0x10f0\n [ 243.683913][ T123] [c00000000c38fb10] [c000000001154d40] napi_enable+0x30/0x60\n [ 243.683921][ T123] [c00000000c38fb40] [c000000000f4ae94] ibmveth_open+0x68/0x5dc\n [ 243.683928][ T123] [c00000000c38fbe0] [c000000000f4aa20] veth_pool_store+0x220/0x270\n [ 243.683936][ T123] [c00000000c38fc70] [c000000000826278] sysfs_kf_write+0x68/0xb0\n [ 243.683944][ T123] [c00000000c38fcb0] [c0000000008240b8] kernfs_fop_write_iter+0x198/0x2d0\n [ 243.683951][ T123] [c00000000c38fd00] [c00000000071b9ac] vfs_write+0x34c/0x650\n [ 243.683958][ T123] [c00000000c38fdc0] [c00000000071bea8] ksys_write+0x88/0x150\n [ 243.683966][ T123] [c00000000c38fe10] [c0000000000317f4] system_call_exception+0x124/0x340\n [ 243.683973][ T123] [c00000000c38fe50] [c00000000000d05c] system_call_vectored_common+0x15c/0x2ec\n ...\n [ 243.684087][ T123] Showing all locks held in the system:\n [ 243.684095][ T123] 1 lock held by khungtaskd/123:\n [ 243.684099][ T123] #0: c00000000278e370 (rcu_read_lock){....}-{1:2}, at: debug_show_all_locks+0x50/0x248\n [ 243.684114][ T123] 4 locks held by stress.sh/4365:\n [ 243.684119][ T123] #0: c00000003a4cd3f8 (sb_writers#3){.+.+}-{0:0}, at: 
ksys_write+0x88/0x150\n [ 243.684132][ T123] #1: c000000041aea888 (\u0026of-\u003emutex#2){+.+.}-{3:3}, at: kernfs_fop_write_iter+0x154/0x2d0\n [ 243.684143][ T123] #2: c0000000366fb9a8 (kn-\u003eactive#64){.+.+}-{0:0}, at: kernfs_fop_write_iter+0x160/0x2d0\n [ 243.684155][ T123] #3: c000000035ff4cb8 (\u0026dev-\u003elock){+.+.}-{3:3}, at: napi_enable+0x30/0x60\n [ 243.684166][ T123] 5 locks held by stress.sh/4366:\n [ 243.684170][ T123] #0: c00000003a4cd3f8 (sb_writers#3){.+.+}-{0:0}, at: ksys_write+0x88/0x150\n [ 243.\n---truncated---", "id": "GHSA-59g4-mvcc-7p3x", "modified": "2025-04-16T15:34:41Z", "published": "2025-04-16T15:34:41Z", "references": [ { "type": "ADVISORY", "url": "https://nvd.nist.gov/vuln/detail/CVE-2025-22053" }, { "type": "WEB", "url": "https://git.kernel.org/stable/c/053f3ff67d7feefc75797863f3d84b47ad47086f" }, { "type": "WEB", "url": "https://git.kernel.org/stable/c/0a2470e3ecde64fc7e3781dc474923193621ae67" }, { "type": "WEB", "url": "https://git.kernel.org/stable/c/1e458c292f4c687dcf5aad32dd4836d03cd2191f" }, { "type": "WEB", "url": "https://git.kernel.org/stable/c/86cc70f5c85dc09bf7f3e1eee380eefe73c90765" }, { "type": "WEB", "url": "https://git.kernel.org/stable/c/8a88bb092f4208355880b9fdcc69d491aa297595" } ], "schema_version": "1.4.0", "severity": [] }
Sightings
No sightings have been reported for this vulnerability.
Nomenclature
- Seen: The vulnerability was mentioned, discussed, or seen somewhere by the user.
- Confirmed: The vulnerability is confirmed from an analyst perspective.
- Exploited: This vulnerability was exploited and seen by the user reporting the sighting.
- Patched: This vulnerability was successfully patched by the user reporting the sighting.
- Not exploited: This vulnerability was not exploited or seen by the user reporting the sighting.
- Not confirmed: The user expresses doubt about the veracity of the vulnerability.
- Not patched: This vulnerability was not successfully patched by the user reporting the sighting.