CVE-2022-49998 (GCVE-0-2022-49998)
Vulnerability from cvelistv5
Published: 2025-06-18 11:00
Modified: 2025-06-18 11:00
Severity: not provided (the record carries no CVSS metrics)
Summary
In the Linux kernel, the following vulnerability has been resolved:

rxrpc: Fix locking in rxrpc's sendmsg

Fix three bugs in rxrpc's sendmsg implementation:

 (1) rxrpc_new_client_call() should release the socket lock when returning
     an error from rxrpc_get_call_slot().

 (2) rxrpc_wait_for_tx_window_intr() will return without the call mutex
     held in the event that we're interrupted by a signal whilst waiting
     for tx space on the socket or relocking the call mutex afterwards.

     Fix this by: (a) moving the unlock/lock of the call mutex up to
     rxrpc_send_data() such that the lock is not held around all of
     rxrpc_wait_for_tx_window*() and (b) indicating to higher callers
     whether we return with the lock dropped.  Note that this means
     recvmsg() will not block on this call whilst we're waiting.

 (3) After dropping and regaining the call mutex, rxrpc_send_data() needs
     to go and recheck the state of the tx_pending buffer and the
     tx_total_len check in case we raced with another sendmsg() on the same
     call.

Thinking on this some more, it might make sense to have different locks for
sendmsg() and recvmsg().  There's probably no need to make recvmsg() wait
for sendmsg().  It does mean that recvmsg() can return MSG_EOR indicating
that a call is dead before a sendmsg() to that call returns - but that can
currently happen anyway.

Without fix (2), something like the following can be induced:

    WARNING: bad unlock balance detected!
    5.16.0-rc6-syzkaller #0 Not tainted
    -------------------------------------
    syz-executor011/3597 is trying to release lock (&call->user_mutex) at:
    [<ffffffff885163a3>] rxrpc_do_sendmsg+0xc13/0x1350 net/rxrpc/sendmsg.c:748
    but there are no more locks to release!

    other info that might help us debug this:
    no locks held by syz-executor011/3597.
    ...
    Call Trace:
     <TASK>
     __dump_stack lib/dump_stack.c:88 [inline]
     dump_stack_lvl+0xcd/0x134 lib/dump_stack.c:106
     print_unlock_imbalance_bug include/trace/events/lock.h:58 [inline]
     __lock_release kernel/locking/lockdep.c:5306 [inline]
     lock_release.cold+0x49/0x4e kernel/locking/lockdep.c:5657
     __mutex_unlock_slowpath+0x99/0x5e0 kernel/locking/mutex.c:900
     rxrpc_do_sendmsg+0xc13/0x1350 net/rxrpc/sendmsg.c:748
     rxrpc_sendmsg+0x420/0x630 net/rxrpc/af_rxrpc.c:561
     sock_sendmsg_nosec net/socket.c:704 [inline]
     sock_sendmsg+0xcf/0x120 net/socket.c:724
     ____sys_sendmsg+0x6e8/0x810 net/socket.c:2409
     ___sys_sendmsg+0xf3/0x170 net/socket.c:2463
     __sys_sendmsg+0xe5/0x1b0 net/socket.c:2492
     do_syscall_x64 arch/x86/entry/common.c:50 [inline]
     do_syscall_64+0x35/0xb0 arch/x86/entry/common.c:80
     entry_SYSCALL_64_after_hwframe+0x44/0xae

[Thanks to Hawkins Jiawei and Khalid Masum for their attempts to fix this]
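To make the locking contract behind fixes (2) and (3) concrete, here is a minimal userspace sketch using POSIX threads rather than the kernel's mutex API. The names (call_lock, wait_for_tx_window, send_data, tx_pending) are illustrative stand-ins and not the actual rxrpc code: the waiter reports through an out-parameter whether it returned with the lock dropped, and the caller rechecks shared state after the lock has been dropped and retaken.

/* Hypothetical analogue of the pattern described above; build with: gcc -pthread */
#include <pthread.h>
#include <stdbool.h>
#include <stdio.h>

static pthread_mutex_t call_lock = PTHREAD_MUTEX_INITIALIZER;
static int tx_pending = 1;               /* shared state guarded by call_lock */

/* Caller holds call_lock on entry; *lock_dropped tells it whether the lock
 * is still held on return, so an interrupted wait can never be mistaken for
 * a normal return (the unlock-imbalance bug fixed by (2)). */
static int wait_for_tx_window(bool simulate_signal, bool *lock_dropped)
{
    *lock_dropped = true;
    pthread_mutex_unlock(&call_lock);    /* drop the lock while waiting */

    /* ... wait for tx space; a signal may interrupt the wait here ... */
    if (simulate_signal)
        return -1;                       /* interrupted: return with the lock dropped */

    pthread_mutex_lock(&call_lock);      /* normal path: retake the lock */
    *lock_dropped = false;
    return 0;
}

static int send_data(bool simulate_signal)
{
    bool lock_dropped = false;
    int ret;

    pthread_mutex_lock(&call_lock);

    ret = wait_for_tx_window(simulate_signal, &lock_dropped);
    if (ret < 0) {
        if (lock_dropped)
            return ret;                  /* nothing left to unlock: no unlock imbalance */
        goto out_unlock;
    }

    /* The lock was dropped and retaken above, so recheck shared state
     * before trusting anything read earlier (fix (3)). */
    printf("tx_pending after reacquiring the lock: %d\n", tx_pending);

out_unlock:
    pthread_mutex_unlock(&call_lock);
    return ret;
}

int main(void)
{
    send_data(true);                     /* interrupted path: lock already released */
    return send_data(false) < 0;         /* normal path */
}

The actual fix takes the same shape as described in the commit message: the unlock/lock of the call mutex is moved up into rxrpc_send_data(), the rxrpc_wait_for_tx_window*() helpers indicate whether the lock was dropped, and the caller rechecks tx_pending and tx_total_len after reacquiring the mutex.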
Impacted products
Vendor  Product  Version
Linux   Linux    bc5e3a546d553e5223851fc199e69040eb70f68b (start of the affected range; the record below lists this same introducing commit four times, once per fixed branch)


{
  "containers": {
    "cna": {
      "affected": [
        {
          "defaultStatus": "unaffected",
          "product": "Linux",
          "programFiles": [
            "net/rxrpc/call_object.c",
            "net/rxrpc/sendmsg.c"
          ],
          "repo": "https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux.git",
          "vendor": "Linux",
          "versions": [
            {
              "lessThan": "79e2ca7aa96e80961828ab6312264633b66183cc",
              "status": "affected",
              "version": "bc5e3a546d553e5223851fc199e69040eb70f68b",
              "versionType": "git"
            },
            {
              "lessThan": "2bc769b8edb158be7379d15f36e23d66cf850053",
              "status": "affected",
              "version": "bc5e3a546d553e5223851fc199e69040eb70f68b",
              "versionType": "git"
            },
            {
              "lessThan": "091dc91e119fdd61432347231724f4e861c6b465",
              "status": "affected",
              "version": "bc5e3a546d553e5223851fc199e69040eb70f68b",
              "versionType": "git"
            },
            {
              "lessThan": "b0f571ecd7943423c25947439045f0d352ca3dbf",
              "status": "affected",
              "version": "bc5e3a546d553e5223851fc199e69040eb70f68b",
              "versionType": "git"
            }
          ]
        },
        {
          "defaultStatus": "affected",
          "product": "Linux",
          "programFiles": [
            "net/rxrpc/call_object.c",
            "net/rxrpc/sendmsg.c"
          ],
          "repo": "https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux.git",
          "vendor": "Linux",
          "versions": [
            {
              "status": "affected",
              "version": "4.15"
            },
            {
              "lessThan": "4.15",
              "status": "unaffected",
              "version": "0",
              "versionType": "semver"
            },
            {
              "lessThanOrEqual": "5.10.*",
              "status": "unaffected",
              "version": "5.10.140",
              "versionType": "semver"
            },
            {
              "lessThanOrEqual": "5.15.*",
              "status": "unaffected",
              "version": "5.15.64",
              "versionType": "semver"
            },
            {
              "lessThanOrEqual": "5.19.*",
              "status": "unaffected",
              "version": "5.19.6",
              "versionType": "semver"
            },
            {
              "lessThanOrEqual": "*",
              "status": "unaffected",
              "version": "6.0",
              "versionType": "original_commit_for_fix"
            }
          ]
        }
      ],
      "cpeApplicability": [
        {
          "nodes": [
            {
              "cpeMatch": [
                {
                  "criteria": "cpe:2.3:o:linux:linux_kernel:*:*:*:*:*:*:*:*",
                  "versionEndExcluding": "5.10.140",
                  "versionStartIncluding": "4.15",
                  "vulnerable": true
                },
                {
                  "criteria": "cpe:2.3:o:linux:linux_kernel:*:*:*:*:*:*:*:*",
                  "versionEndExcluding": "5.15.64",
                  "versionStartIncluding": "4.15",
                  "vulnerable": true
                },
                {
                  "criteria": "cpe:2.3:o:linux:linux_kernel:*:*:*:*:*:*:*:*",
                  "versionEndExcluding": "5.19.6",
                  "versionStartIncluding": "4.15",
                  "vulnerable": true
                },
                {
                  "criteria": "cpe:2.3:o:linux:linux_kernel:*:*:*:*:*:*:*:*",
                  "versionEndExcluding": "6.0",
                  "versionStartIncluding": "4.15",
                  "vulnerable": true
                }
              ],
              "negate": false,
              "operator": "OR"
            }
          ]
        }
      ],
      "descriptions": [
        {
          "lang": "en",
          "value": "In the Linux kernel, the following vulnerability has been resolved:\n\nrxrpc: Fix locking in rxrpc\u0027s sendmsg\n\nFix three bugs in the rxrpc\u0027s sendmsg implementation:\n\n (1) rxrpc_new_client_call() should release the socket lock when returning\n     an error from rxrpc_get_call_slot().\n\n (2) rxrpc_wait_for_tx_window_intr() will return without the call mutex\n     held in the event that we\u0027re interrupted by a signal whilst waiting\n     for tx space on the socket or relocking the call mutex afterwards.\n\n     Fix this by: (a) moving the unlock/lock of the call mutex up to\n     rxrpc_send_data() such that the lock is not held around all of\n     rxrpc_wait_for_tx_window*() and (b) indicating to higher callers\n     whether we\u0027re return with the lock dropped.  Note that this means\n     recvmsg() will not block on this call whilst we\u0027re waiting.\n\n (3) After dropping and regaining the call mutex, rxrpc_send_data() needs\n     to go and recheck the state of the tx_pending buffer and the\n     tx_total_len check in case we raced with another sendmsg() on the same\n     call.\n\nThinking on this some more, it might make sense to have different locks for\nsendmsg() and recvmsg().  There\u0027s probably no need to make recvmsg() wait\nfor sendmsg().  It does mean that recvmsg() can return MSG_EOR indicating\nthat a call is dead before a sendmsg() to that call returns - but that can\ncurrently happen anyway.\n\nWithout fix (2), something like the following can be induced:\n\n\tWARNING: bad unlock balance detected!\n\t5.16.0-rc6-syzkaller #0 Not tainted\n\t-------------------------------------\n\tsyz-executor011/3597 is trying to release lock (\u0026call-\u003euser_mutex) at:\n\t[\u003cffffffff885163a3\u003e] rxrpc_do_sendmsg+0xc13/0x1350 net/rxrpc/sendmsg.c:748\n\tbut there are no more locks to release!\n\n\tother info that might help us debug this:\n\tno locks held by syz-executor011/3597.\n\t...\n\tCall Trace:\n\t \u003cTASK\u003e\n\t __dump_stack lib/dump_stack.c:88 [inline]\n\t dump_stack_lvl+0xcd/0x134 lib/dump_stack.c:106\n\t print_unlock_imbalance_bug include/trace/events/lock.h:58 [inline]\n\t __lock_release kernel/locking/lockdep.c:5306 [inline]\n\t lock_release.cold+0x49/0x4e kernel/locking/lockdep.c:5657\n\t __mutex_unlock_slowpath+0x99/0x5e0 kernel/locking/mutex.c:900\n\t rxrpc_do_sendmsg+0xc13/0x1350 net/rxrpc/sendmsg.c:748\n\t rxrpc_sendmsg+0x420/0x630 net/rxrpc/af_rxrpc.c:561\n\t sock_sendmsg_nosec net/socket.c:704 [inline]\n\t sock_sendmsg+0xcf/0x120 net/socket.c:724\n\t ____sys_sendmsg+0x6e8/0x810 net/socket.c:2409\n\t ___sys_sendmsg+0xf3/0x170 net/socket.c:2463\n\t __sys_sendmsg+0xe5/0x1b0 net/socket.c:2492\n\t do_syscall_x64 arch/x86/entry/common.c:50 [inline]\n\t do_syscall_64+0x35/0xb0 arch/x86/entry/common.c:80\n\t entry_SYSCALL_64_after_hwframe+0x44/0xae\n\n[Thanks to Hawkins Jiawei and Khalid Masum for their attempts to fix this]"
        }
      ],
      "providerMetadata": {
        "dateUpdated": "2025-06-18T11:00:57.940Z",
        "orgId": "416baaa9-dc9f-4396-8d5f-8c081fb06d67",
        "shortName": "Linux"
      },
      "references": [
        {
          "url": "https://git.kernel.org/stable/c/79e2ca7aa96e80961828ab6312264633b66183cc"
        },
        {
          "url": "https://git.kernel.org/stable/c/2bc769b8edb158be7379d15f36e23d66cf850053"
        },
        {
          "url": "https://git.kernel.org/stable/c/091dc91e119fdd61432347231724f4e861c6b465"
        },
        {
          "url": "https://git.kernel.org/stable/c/b0f571ecd7943423c25947439045f0d352ca3dbf"
        }
      ],
      "title": "rxrpc: Fix locking in rxrpc\u0027s sendmsg",
      "x_generator": {
        "engine": "bippy-1.2.0"
      }
    }
  },
  "cveMetadata": {
    "assignerOrgId": "416baaa9-dc9f-4396-8d5f-8c081fb06d67",
    "assignerShortName": "Linux",
    "cveId": "CVE-2022-49998",
    "datePublished": "2025-06-18T11:00:57.940Z",
    "dateReserved": "2025-06-18T10:57:27.387Z",
    "dateUpdated": "2025-06-18T11:00:57.940Z",
    "state": "PUBLISHED"
  },
  "dataType": "CVE_RECORD",
  "dataVersion": "5.1",
  "vulnerability-lookup:meta": {
    "nvd": "{\"cve\":{\"id\":\"CVE-2022-49998\",\"sourceIdentifier\":\"416baaa9-dc9f-4396-8d5f-8c081fb06d67\",\"published\":\"2025-06-18T11:15:27.557\",\"lastModified\":\"2025-06-18T13:46:52.973\",\"vulnStatus\":\"Awaiting Analysis\",\"cveTags\":[],\"descriptions\":[{\"lang\":\"en\",\"value\":\"In the Linux kernel, the following vulnerability has been resolved:\\n\\nrxrpc: Fix locking in rxrpc\u0027s sendmsg\\n\\nFix three bugs in the rxrpc\u0027s sendmsg implementation:\\n\\n (1) rxrpc_new_client_call() should release the socket lock when returning\\n     an error from rxrpc_get_call_slot().\\n\\n (2) rxrpc_wait_for_tx_window_intr() will return without the call mutex\\n     held in the event that we\u0027re interrupted by a signal whilst waiting\\n     for tx space on the socket or relocking the call mutex afterwards.\\n\\n     Fix this by: (a) moving the unlock/lock of the call mutex up to\\n     rxrpc_send_data() such that the lock is not held around all of\\n     rxrpc_wait_for_tx_window*() and (b) indicating to higher callers\\n     whether we\u0027re return with the lock dropped.  Note that this means\\n     recvmsg() will not block on this call whilst we\u0027re waiting.\\n\\n (3) After dropping and regaining the call mutex, rxrpc_send_data() needs\\n     to go and recheck the state of the tx_pending buffer and the\\n     tx_total_len check in case we raced with another sendmsg() on the same\\n     call.\\n\\nThinking on this some more, it might make sense to have different locks for\\nsendmsg() and recvmsg().  There\u0027s probably no need to make recvmsg() wait\\nfor sendmsg().  It does mean that recvmsg() can return MSG_EOR indicating\\nthat a call is dead before a sendmsg() to that call returns - but that can\\ncurrently happen anyway.\\n\\nWithout fix (2), something like the following can be induced:\\n\\n\\tWARNING: bad unlock balance detected!\\n\\t5.16.0-rc6-syzkaller #0 Not tainted\\n\\t-------------------------------------\\n\\tsyz-executor011/3597 is trying to release lock (\u0026call-\u003euser_mutex) at:\\n\\t[\u003cffffffff885163a3\u003e] rxrpc_do_sendmsg+0xc13/0x1350 net/rxrpc/sendmsg.c:748\\n\\tbut there are no more locks to release!\\n\\n\\tother info that might help us debug this:\\n\\tno locks held by syz-executor011/3597.\\n\\t...\\n\\tCall Trace:\\n\\t \u003cTASK\u003e\\n\\t __dump_stack lib/dump_stack.c:88 [inline]\\n\\t dump_stack_lvl+0xcd/0x134 lib/dump_stack.c:106\\n\\t print_unlock_imbalance_bug include/trace/events/lock.h:58 [inline]\\n\\t __lock_release kernel/locking/lockdep.c:5306 [inline]\\n\\t lock_release.cold+0x49/0x4e kernel/locking/lockdep.c:5657\\n\\t __mutex_unlock_slowpath+0x99/0x5e0 kernel/locking/mutex.c:900\\n\\t rxrpc_do_sendmsg+0xc13/0x1350 net/rxrpc/sendmsg.c:748\\n\\t rxrpc_sendmsg+0x420/0x630 net/rxrpc/af_rxrpc.c:561\\n\\t sock_sendmsg_nosec net/socket.c:704 [inline]\\n\\t sock_sendmsg+0xcf/0x120 net/socket.c:724\\n\\t ____sys_sendmsg+0x6e8/0x810 net/socket.c:2409\\n\\t ___sys_sendmsg+0xf3/0x170 net/socket.c:2463\\n\\t __sys_sendmsg+0xe5/0x1b0 net/socket.c:2492\\n\\t do_syscall_x64 arch/x86/entry/common.c:50 [inline]\\n\\t do_syscall_64+0x35/0xb0 arch/x86/entry/common.c:80\\n\\t entry_SYSCALL_64_after_hwframe+0x44/0xae\\n\\n[Thanks to Hawkins Jiawei and Khalid Masum for their attempts to fix this]\"},{\"lang\":\"es\",\"value\":\"En el kernel de Linux, se ha resuelto la siguiente vulnerabilidad: rxrpc: Arreglar el bloqueo en sendmsg de rxrpc Corrige tres errores en la implementaci\u00f3n de sendmsg de rxrpc: (1) rxrpc_new_client_call() 
deber\u00eda liberar el bloqueo del socket al devolver un error de rxrpc_get_call_slot(). (2) rxrpc_wait_for_tx_window_intr() retornar\u00e1 sin el mutex de llamada retenido en caso de que seamos interrumpidos por una se\u00f1al mientras esperamos espacio de transmisi\u00f3n en el socket o volvemos a bloquear el mutex de llamada posteriormente. Corrige esto mediante: (a) mover el desbloqueo/bloqueo del mutex de llamada hasta rxrpc_send_data() de modo que el bloqueo no se mantenga alrededor de todo rxrpc_wait_for_tx_window*() y (b) indicar a los llamadores superiores si retornamos con el bloqueo eliminado. Tenga en cuenta que esto significa que recvmsg() no se bloquear\u00e1 en esta llamada mientras esperamos. (3) Despu\u00e9s de eliminar y recuperar el mutex de llamada, rxrpc_send_data() debe volver a verificar el estado del b\u00fafer tx_pending y la comprobaci\u00f3n de tx_total_len en caso de que hayamos utilizado otro sendmsg() en la misma llamada. Pens\u00e1ndolo bien, podr\u00eda tener sentido tener bloqueos diferentes para sendmsg() y recvmsg(). Probablemente no sea necesario que recvmsg() espere a sendmsg(). Esto significa que recvmsg() puede devolver MSG_EOR, lo que indica que una llamada est\u00e1 inactiva antes de que un sendmsg() a esa llamada regrese, pero eso puede ocurrir de todos modos. Sin la correcci\u00f3n (2), se puede inducir algo como lo siguiente: \u00a1ADVERTENCIA: se detect\u00f3 un saldo de desbloqueo incorrecto! 5.16.0-rc6-syzkaller #0 No contaminado ------------------------------------- syz-executor011/3597 est\u00e1 intentando liberar el bloqueo (\u0026amp;call-\u0026gt;user_mutex) en: [] rxrpc_do_sendmsg+0xc13/0x1350 net/rxrpc/sendmsg.c:748 \u00a1pero no hay m\u00e1s bloqueos para liberar! Otra informaci\u00f3n que podr\u00eda ayudarnos a depurar esto: syz-executor011/3597 no tiene bloqueos. ... Seguimiento de llamadas:  __dump_stack lib/dump_stack.c:88 [en l\u00ednea] dump_stack_lvl+0xcd/0x134 lib/dump_stack.c:106 print_unlock_imbalance_bug include/trace/events/lock.h:58 [en l\u00ednea] __lock_release kernel/locking/lockdep.c:5306 [en l\u00ednea] lock_release.cold+0x49/0x4e kernel/locking/lockdep.c:5657 __mutex_unlock_slowpath+0x99/0x5e0 kernel/locking/mutex.c:900 rxrpc_do_sendmsg+0xc13/0x1350 net/rxrpc/sendmsg.c:748 rxrpc_sendmsg+0x420/0x630 net/rxrpc/af_rxrpc.c:561 sock_sendmsg_nosec net/socket.c:704 [en l\u00ednea] sock_sendmsg+0xcf/0x120 net/socket.c:724 ____sys_sendmsg+0x6e8/0x810 net/socket.c:2409 ___sys_sendmsg+0xf3/0x170 net/socket.c:2463 __sys_sendmsg+0xe5/0x1b0 net/socket.c:2492 do_syscall_x64 arch/x86/entry/common.c:50 [en l\u00ednea] do_syscall_64+0x35/0xb0 arch/x86/entry/common.c:80 entry_SYSCALL_64_after_hwframe+0x44/0xae [Gracias a Hawkins Jiawei y Khalid Masum por sus intentos de solucionar este problema]\"}],\"metrics\":{},\"references\":[{\"url\":\"https://git.kernel.org/stable/c/091dc91e119fdd61432347231724f4e861c6b465\",\"source\":\"416baaa9-dc9f-4396-8d5f-8c081fb06d67\"},{\"url\":\"https://git.kernel.org/stable/c/2bc769b8edb158be7379d15f36e23d66cf850053\",\"source\":\"416baaa9-dc9f-4396-8d5f-8c081fb06d67\"},{\"url\":\"https://git.kernel.org/stable/c/79e2ca7aa96e80961828ab6312264633b66183cc\",\"source\":\"416baaa9-dc9f-4396-8d5f-8c081fb06d67\"},{\"url\":\"https://git.kernel.org/stable/c/b0f571ecd7943423c25947439045f0d352ca3dbf\",\"source\":\"416baaa9-dc9f-4396-8d5f-8c081fb06d67\"}]}}"
  }
}

