GHSA-rw5q-7v3h-qcvv
Vulnerability from GitHub
Published: 2025-05-02 18:31
Modified: 2025-05-02 18:31
Details

In the Linux kernel, the following vulnerability has been resolved:

bpf: Adjust insufficient default bpf_jit_limit

We've seen recent AWS EKS (Kubernetes) user reports like the following:

After upgrading EKS nodes from v20230203 to v20230217 on our 1.24 EKS clusters, after a few days a number of the nodes have containers stuck in the ContainerCreating state, or liveness/readiness probes reporting the following error:

Readiness probe errored: rpc error: code = Unknown desc = failed to
exec in container: failed to start exec "4a11039f730203ffc003b7[...]":
OCI runtime exec failed: exec failed: unable to start container process:
unable to init seccomp: error loading seccomp filter into kernel:
error loading seccomp filter: errno 524: unknown

However, we had not been seeing this issue on previous AMIs and it only started to occur on v20230217 (following the upgrade from kernel 5.4 to 5.10) with no other changes to the underlying cluster or workloads.

We tried the suggestions from that issue (sysctl net.core.bpf_jit_limit=452534528), which helped to immediately allow containers to be created and probes to execute, but after approximately a day the issue returned, and the value returned by cat /proc/vmallocinfo | grep bpf_jit | awk '{s+=$2} END {print s}' was steadily increasing.

I tested the bpf tree to observe bpf_jit_charge_modmem and bpf_jit_uncharge_modmem, the sizes passed in to them, as well as bpf_jit_current, under a tcpdump BPF filter, seccomp BPF, and native (e)BPF programs; the behavior all looks sane and expected, that is, nothing is "leaking" from an upstream perspective.

The bpf_jit_limit knob was originally added in order to avoid a situation where unprivileged applications loading BPF programs (e.g. seccomp BPF policies) could consume all the module memory space via the BPF JIT, such that loading of kernel modules would be prevented. The default limit was defined back in 2018 and, while good enough back then, we are generally seeing far more BPF consumers today.

Adjust the limit for the BPF JIT pool from the original 1/4 to 1/2 of the module memory space to better reflect today's needs and to avoid more users running into potentially hard-to-debug issues.

{
  "affected": [],
  "aliases": [
    "CVE-2023-53076"
  ],
  "database_specific": {
    "cwe_ids": [],
    "github_reviewed": false,
    "github_reviewed_at": null,
    "nvd_published_at": "2025-05-02T16:15:26Z",
    "severity": null
  },
  "details": "In the Linux kernel, the following vulnerability has been resolved:\n\nbpf: Adjust insufficient default bpf_jit_limit\n\nWe\u0027ve seen recent AWS EKS (Kubernetes) user reports like the following:\n\n  After upgrading EKS nodes from v20230203 to v20230217 on our 1.24 EKS\n  clusters after a few days a number of the nodes have containers stuck\n  in ContainerCreating state or liveness/readiness probes reporting the\n  following error:\n\n    Readiness probe errored: rpc error: code = Unknown desc = failed to\n    exec in container: failed to start exec \"4a11039f730203ffc003b7[...]\":\n    OCI runtime exec failed: exec failed: unable to start container process:\n    unable to init seccomp: error loading seccomp filter into kernel:\n    error loading seccomp filter: errno 524: unknown\n\n  However, we had not been seeing this issue on previous AMIs and it only\n  started to occur on v20230217 (following the upgrade from kernel 5.4 to\n  5.10) with no other changes to the underlying cluster or workloads.\n\n  We tried the suggestions from that issue (sysctl net.core.bpf_jit_limit=452534528)\n  which helped to immediately allow containers to be created and probes to\n  execute but after approximately a day the issue returned and the value\n  returned by cat /proc/vmallocinfo | grep bpf_jit | awk \u0027{s+=$2} END {print s}\u0027\n  was steadily increasing.\n\nI tested bpf tree to observe bpf_jit_charge_modmem, bpf_jit_uncharge_modmem\ntheir sizes passed in as well as bpf_jit_current under tcpdump BPF filter,\nseccomp BPF and native (e)BPF programs, and the behavior all looks sane\nand expected, that is nothing \"leaking\" from an upstream perspective.\n\nThe bpf_jit_limit knob was originally added in order to avoid a situation\nwhere unprivileged applications loading BPF programs (e.g. seccomp BPF\npolicies) consuming all the module memory space via BPF JIT such that loading\nof kernel modules would be prevented. The default limit was defined back in\n2018 and while good enough back then, we are generally seeing far more BPF\nconsumers today.\n\nAdjust the limit for the BPF JIT pool from originally 1/4 to now 1/2 of the\nmodule memory space to better reflect today\u0027s needs and avoid more users\nrunning into potentially hard to debug issues.",
  "id": "GHSA-rw5q-7v3h-qcvv",
  "modified": "2025-05-02T18:31:35Z",
  "published": "2025-05-02T18:31:35Z",
  "references": [
    {
      "type": "ADVISORY",
      "url": "https://nvd.nist.gov/vuln/detail/CVE-2023-53076"
    },
    {
      "type": "WEB",
      "url": "https://git.kernel.org/stable/c/10ec8ca8ec1a2f04c4ed90897225231c58c124a7"
    },
    {
      "type": "WEB",
      "url": "https://git.kernel.org/stable/c/374ed036309fce73f9db04c3054018a71912d46b"
    },
    {
      "type": "WEB",
      "url": "https://git.kernel.org/stable/c/42049e65d338870e93732b0b80c6c41faf6aa781"
    },
    {
      "type": "WEB",
      "url": "https://git.kernel.org/stable/c/54869daa6a437887614274f65298ba44a3fac63a"
    },
    {
      "type": "WEB",
      "url": "https://git.kernel.org/stable/c/68ed00a37d2d1c932ff7be40be4b90c4bec48c56"
    },
    {
      "type": "WEB",
      "url": "https://git.kernel.org/stable/c/9cda812c76067c8a771eae43bb6943481cc7effc"
    },
    {
      "type": "WEB",
      "url": "https://git.kernel.org/stable/c/a4bbab27c4bf69486f5846d44134eb31c37e9b22"
    },
    {
      "type": "WEB",
      "url": "https://git.kernel.org/stable/c/d69c2ded95b17d51cc6632c7848cbd476381ecd6"
    }
  ],
  "schema_version": "1.4.0",
  "severity": []
}

