fkie_cve-2024-53058
Vulnerability from fkie_nvd
Published
2024-11-19 18:15
Modified
2024-11-22 17:53
Summary
In the Linux kernel, the following vulnerability has been resolved:

net: stmmac: TSO: Fix unbalanced DMA map/unmap for non-paged SKB data

If the non-paged data of an SKB carries both the protocol header and the protocol payload on a platform whose DMA AXI address width is configured to 40-bit/48-bit, or if the size of the non-paged data is bigger than TSO_MAX_BUFF_SIZE on a platform whose DMA AXI address width is configured to 32-bit, then the SKB requires at least two DMA transmit descriptors to serve it.

For example, three descriptors are allocated to split one DMA buffer mapped from one piece of non-paged data:
    dma_desc[N + 0],
    dma_desc[N + 1],
    dma_desc[N + 2].
Three elements of tx_q->tx_skbuff_dma[] are then allocated to hold extra information to be reused in stmmac_tx_clean():
    tx_q->tx_skbuff_dma[N + 0],
    tx_q->tx_skbuff_dma[N + 1],
    tx_q->tx_skbuff_dma[N + 2].
Now focus on tx_q->tx_skbuff_dma[entry].buf, which is the DMA buffer address returned by the DMA mapping call. stmmac_tx_clean() will try to unmap the DMA buffer _ONLY_IF_ tx_q->tx_skbuff_dma[entry].buf is a valid buffer address.

The expected behavior, which saves the DMA buffer address of this non-paged data to tx_q->tx_skbuff_dma[entry].buf, is:
    tx_q->tx_skbuff_dma[N + 0].buf = NULL;
    tx_q->tx_skbuff_dma[N + 1].buf = NULL;
    tx_q->tx_skbuff_dma[N + 2].buf = dma_map_single();
Unfortunately, the current code misbehaves like this:
    tx_q->tx_skbuff_dma[N + 0].buf = dma_map_single();
    tx_q->tx_skbuff_dma[N + 1].buf = NULL;
    tx_q->tx_skbuff_dma[N + 2].buf = NULL;

On the stmmac_tx_clean() side, when dma_desc[N + 0] is closed by the DMA engine, tx_q->tx_skbuff_dma[N + 0].buf is obviously a valid buffer address, so the DMA buffer is unmapped immediately. In rare cases the DMA engine has not yet finished the pending dma_desc[N + 1] and dma_desc[N + 2]; things then go horribly wrong: DMA accesses an unmapped/unreferenced memory region, so corrupted data is transmitted or an IOMMU fault is triggered.

In contrast, the for-loop that maps SKB fragments behaves exactly as expected, and that is how the driver should handle both non-paged data and paged frags.

This patch corrects the DMA map/unmap sequence by fixing the array index for tx_q->tx_skbuff_dma[entry].buf when assigning the DMA buffer address.

Tested and verified on DWXGMAC CORE 3.20a.
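The fix described above can be illustrated with a minimal, self-contained sketch. The structures and the `TSO_MAX_BUFF_SIZE` value here are hypothetical stand-ins for the real driver internals (the actual code lives in `stmmac_tso_xmit()`); only the indexing behavior is modeled: when one mapped buffer is split across several descriptors, the DMA address is stored in the LAST ring entry, so a cleanup pass that keys unmapping off a non-NULL `.buf` cannot unmap while earlier chunks are still pending.

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Hypothetical chunk limit; the real macro lives in the stmmac driver. */
#define TSO_MAX_BUFF_SIZE 16384

/* Simplified ring entry: only .buf matters for this illustration.
 * A nonzero .buf means "unmap this address during tx_clean". */
struct tx_entry {
    uintptr_t buf;
};

/* Record one dma_map_single() result for a buffer split across several
 * descriptors. The corrected behavior stores the DMA address only in the
 * LAST entry; intermediate entries stay NULL so cleanup skips them.
 * Returns the number of descriptors consumed. */
static int record_mapping(struct tx_entry *ring, int first,
                          size_t len, uintptr_t dma_addr)
{
    int ndesc = (int)((len + TSO_MAX_BUFF_SIZE - 1) / TSO_MAX_BUFF_SIZE);

    for (int i = 0; i < ndesc - 1; i++)
        ring[first + i].buf = 0;            /* pending chunks: no unmap */
    ring[first + ndesc - 1].buf = dma_addr; /* last chunk owns the mapping */
    return ndesc;
}
```

With a buffer just under three chunks long, only `ring[N + 2].buf` holds the address, matching the "expected behavior" assignment sequence in the summary; the buggy code effectively stored it at `ring[N + 0].buf` instead.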



{
  "configurations": [
    {
      "nodes": [
        {
          "cpeMatch": [
            {
              "criteria": "cpe:2.3:o:linux:linux_kernel:*:*:*:*:*:*:*:*",
              "matchCriteriaId": "F7B2EF6A-A80D-4A30-B1E9-7DBA47DFA518",
              "versionEndExcluding": "5.15.171",
              "versionStartIncluding": "4.7",
              "vulnerable": true
            },
            {
              "criteria": "cpe:2.3:o:linux:linux_kernel:*:*:*:*:*:*:*:*",
              "matchCriteriaId": "43EFDC15-E4D4-4F1E-B70D-62F0854BFDF3",
              "versionEndExcluding": "6.1.116",
              "versionStartIncluding": "5.16",
              "vulnerable": true
            },
            {
              "criteria": "cpe:2.3:o:linux:linux_kernel:*:*:*:*:*:*:*:*",
              "matchCriteriaId": "75088E5E-2400-4D20-915F-7A65C55D9CCD",
              "versionEndExcluding": "6.6.60",
              "versionStartIncluding": "6.2",
              "vulnerable": true
            },
            {
              "criteria": "cpe:2.3:o:linux:linux_kernel:*:*:*:*:*:*:*:*",
              "matchCriteriaId": "E96F53A4-5E87-4A70-BD9A-BC327828D57F",
              "versionEndExcluding": "6.11.7",
              "versionStartIncluding": "6.7",
              "vulnerable": true
            },
            {
              "criteria": "cpe:2.3:o:linux:linux_kernel:6.12:rc1:*:*:*:*:*:*",
              "matchCriteriaId": "7F361E1D-580F-4A2D-A509-7615F73167A1",
              "vulnerable": true
            },
            {
              "criteria": "cpe:2.3:o:linux:linux_kernel:6.12:rc2:*:*:*:*:*:*",
              "matchCriteriaId": "925478D0-3E3D-4E6F-ACD5-09F28D5DF82C",
              "vulnerable": true
            },
            {
              "criteria": "cpe:2.3:o:linux:linux_kernel:6.12:rc3:*:*:*:*:*:*",
              "matchCriteriaId": "3C95E234-D335-4B6C-96BF-E2CEBD8654ED",
              "vulnerable": true
            },
            {
              "criteria": "cpe:2.3:o:linux:linux_kernel:6.12:rc4:*:*:*:*:*:*",
              "matchCriteriaId": "E0F717D8-3014-4F84-8086-0124B2111379",
              "vulnerable": true
            },
            {
              "criteria": "cpe:2.3:o:linux:linux_kernel:6.12:rc5:*:*:*:*:*:*",
              "matchCriteriaId": "24DBE6C7-2AAE-4818-AED2-E131F153D2FA",
              "vulnerable": true
            }
          ],
          "negate": false,
          "operator": "OR"
        }
      ]
    }
  ],
  "cveTags": [],
  "descriptions": [
    {
      "lang": "en",
      "value": "In the Linux kernel, the following vulnerability has been resolved:\n\nnet: stmmac: TSO: Fix unbalanced DMA map/unmap for non-paged SKB data\n\nIn case the non-paged data of a SKB carries protocol header and protocol\npayload to be transmitted on a certain platform that the DMA AXI address\nwidth is configured to 40-bit/48-bit, or the size of the non-paged data\nis bigger than TSO_MAX_BUFF_SIZE on a certain platform that the DMA AXI\naddress width is configured to 32-bit, then this SKB requires at least\ntwo DMA transmit descriptors to serve it.\n\nFor example, three descriptors are allocated to split one DMA buffer\nmapped from one piece of non-paged data:\n    dma_desc[N + 0],\n    dma_desc[N + 1],\n    dma_desc[N + 2].\nThen three elements of tx_q-\u003etx_skbuff_dma[] will be allocated to hold\nextra information to be reused in stmmac_tx_clean():\n    tx_q-\u003etx_skbuff_dma[N + 0],\n    tx_q-\u003etx_skbuff_dma[N + 1],\n    tx_q-\u003etx_skbuff_dma[N + 2].\nNow we focus on tx_q-\u003etx_skbuff_dma[entry].buf, which is the DMA buffer\naddress returned by DMA mapping call. stmmac_tx_clean() will try to\nunmap the DMA buffer _ONLY_IF_ tx_q-\u003etx_skbuff_dma[entry].buf\nis a valid buffer address.\n\nThe expected behavior that saves DMA buffer address of this non-paged\ndata to tx_q-\u003etx_skbuff_dma[entry].buf is:\n    tx_q-\u003etx_skbuff_dma[N + 0].buf = NULL;\n    tx_q-\u003etx_skbuff_dma[N + 1].buf = NULL;\n    tx_q-\u003etx_skbuff_dma[N + 2].buf = dma_map_single();\nUnfortunately, the current code misbehaves like this:\n    tx_q-\u003etx_skbuff_dma[N + 0].buf = dma_map_single();\n    tx_q-\u003etx_skbuff_dma[N + 1].buf = NULL;\n    tx_q-\u003etx_skbuff_dma[N + 2].buf = NULL;\n\nOn the stmmac_tx_clean() side, when dma_desc[N + 0] is closed by the\nDMA engine, tx_q-\u003etx_skbuff_dma[N + 0].buf is a valid buffer address\nobviously, then the DMA buffer will be unmapped immediately.\nThere may be a rare case that the DMA engine does not finish the\npending dma_desc[N + 1], dma_desc[N + 2] yet. Now things will go\nhorribly wrong, DMA is going to access a unmapped/unreferenced memory\nregion, corrupted data will be transmited or iommu fault will be\ntriggered :(\n\nIn contrast, the for-loop that maps SKB fragments behaves perfectly\nas expected, and that is how the driver should do for both non-paged\ndata and paged frags actually.\n\nThis patch corrects DMA map/unmap sequences by fixing the array index\nfor tx_q-\u003etx_skbuff_dma[entry].buf when assigning DMA buffer address.\n\nTested and verified on DWXGMAC CORE 3.20a"
    },
    {
      "lang": "es",
      "value": "En el kernel de Linux, se ha resuelto la siguiente vulnerabilidad: net: stmmac: TSO: Fix DMA map/unmap no balanceado para datos SKB no paginados En caso de que los datos no paginados de un SKB lleven encabezado de protocolo y payload de protocolo para ser transmitidos en una determinada plataforma que el ancho de direcci\u00f3n DMA AXI est\u00e1 configurado a 40 bits/48 bits, o el tama\u00f1o de los datos no paginados es mayor que TSO_MAX_BUFF_SIZE en una determinada plataforma que el ancho de direcci\u00f3n DMA AXI est\u00e1 configurado a 32 bits, entonces este SKB requiere al menos dos descriptores de transmisi\u00f3n DMA para servirlo. Por ejemplo, se asignan tres descriptores para dividir un buffer DMA mapeado a partir de una pieza de datos no paginados: dma_desc[N + 0], dma_desc[N + 1], dma_desc[N + 2]. Luego, se asignar\u00e1n tres elementos de tx_q-\u0026gt;tx_skbuff_dma[] para almacenar informaci\u00f3n adicional que se reutilizar\u00e1 en stmmac_tx_clean(): tx_q-\u0026gt;tx_skbuff_dma[N + 0], tx_q-\u0026gt;tx_skbuff_dma[N + 1], tx_q-\u0026gt;tx_skbuff_dma[N + 2]. Ahora nos centramos en tx_q-\u0026gt;tx_skbuff_dma[entry].buf, que es la direcci\u00f3n del b\u00fafer DMA devuelta por la llamada de mapeo DMA. stmmac_tx_clean() intentar\u00e1 desasignar el b\u00fafer DMA _SOLO_SI_ tx_q-\u0026gt;tx_skbuff_dma[entry].buf es una direcci\u00f3n de b\u00fafer v\u00e1lida. El comportamiento esperado que guarda la direcci\u00f3n del buffer DMA de estos datos no paginados en tx_q-\u0026gt;tx_skbuff_dma[entrada].buf es: tx_q-\u0026gt;tx_skbuff_dma[N + 0].buf = NULL; tx_q-\u0026gt;tx_skbuff_dma[N + 1].buf = NULL; tx_q-\u0026gt;tx_skbuff_dma[N + 2].buf = dma_map_single(); Desafortunadamente, el c\u00f3digo actual se comporta mal de esta manera: tx_q-\u0026gt;tx_skbuff_dma[N + 0].buf = dma_map_single(); tx_q-\u0026gt;tx_skbuff_dma[N + 1].buf = NULL; tx_q-\u0026gt;tx_skbuff_dma[N + 2].buf = NULL; En el lado stmmac_tx_clean(), cuando el motor DMA cierra dma_desc[N + 0], tx_q-\u0026gt;tx_skbuff_dma[N + 0].buf es obviamente una direcci\u00f3n de b\u00fafer v\u00e1lida, entonces el b\u00fafer DMA se desasignar\u00e1 inmediatamente. Puede haber un caso poco com\u00fan en el que el motor DMA no finalice a\u00fan los dma_desc[N + 1], dma_desc[N + 2] pendientes. Ahora las cosas saldr\u00e1n terriblemente mal, DMA acceder\u00e1 a una regi\u00f3n de memoria no mapeada/no referenciada, se transmitir\u00e1n datos corruptos o se activar\u00e1 un error de iommu :( Por el contrario, el bucle for que mapea fragmentos SKB se comporta perfectamente como se espera, y as\u00ed es como el controlador deber\u00eda funcionar tanto para datos no paginados como para fragmentos paginados en realidad. Este parche corrige las secuencias de mapeo/desasignamiento de DMA al arreglar el \u00edndice de matriz para tx_q-\u0026gt;tx_skbuff_dma[entry].buf al asignar la direcci\u00f3n del b\u00fafer de DMA. Probado y verificado en DWXGMAC CORE 3.20a"
    }
  ],
  "id": "CVE-2024-53058",
  "lastModified": "2024-11-22T17:53:32.500",
  "metrics": {
    "cvssMetricV31": [
      {
        "cvssData": {
          "attackComplexity": "LOW",
          "attackVector": "LOCAL",
          "availabilityImpact": "HIGH",
          "baseScore": 5.5,
          "baseSeverity": "MEDIUM",
          "confidentialityImpact": "NONE",
          "integrityImpact": "NONE",
          "privilegesRequired": "LOW",
          "scope": "UNCHANGED",
          "userInteraction": "NONE",
          "vectorString": "CVSS:3.1/AV:L/AC:L/PR:L/UI:N/S:U/C:N/I:N/A:H",
          "version": "3.1"
        },
        "exploitabilityScore": 1.8,
        "impactScore": 3.6,
        "source": "nvd@nist.gov",
        "type": "Primary"
      }
    ]
  },
  "published": "2024-11-19T18:15:25.767",
  "references": [
    {
      "source": "416baaa9-dc9f-4396-8d5f-8c081fb06d67",
      "tags": [
        "Patch"
      ],
      "url": "https://git.kernel.org/stable/c/07c9c26e37542486e34d767505e842f48f29c3f6"
    },
    {
      "source": "416baaa9-dc9f-4396-8d5f-8c081fb06d67",
      "tags": [
        "Patch"
      ],
      "url": "https://git.kernel.org/stable/c/58d23d835eb498336716cca55b5714191a309286"
    },
    {
      "source": "416baaa9-dc9f-4396-8d5f-8c081fb06d67",
      "tags": [
        "Patch"
      ],
      "url": "https://git.kernel.org/stable/c/66600fac7a984dea4ae095411f644770b2561ede"
    },
    {
      "source": "416baaa9-dc9f-4396-8d5f-8c081fb06d67",
      "tags": [
        "Patch"
      ],
      "url": "https://git.kernel.org/stable/c/a3ff23f7c3f0e13f718900803e090fd3997d6bc9"
    },
    {
      "source": "416baaa9-dc9f-4396-8d5f-8c081fb06d67",
      "tags": [
        "Patch"
      ],
      "url": "https://git.kernel.org/stable/c/ece593fc9c00741b682869d3f3dc584d37b7c9df"
    }
  ],
  "sourceIdentifier": "416baaa9-dc9f-4396-8d5f-8c081fb06d67",
  "vulnStatus": "Analyzed",
  "weaknesses": [
    {
      "description": [
        {
          "lang": "en",
          "value": "NVD-CWE-noinfo"
        }
      ],
      "source": "nvd@nist.gov",
      "type": "Primary"
    }
  ]
}


