CWE-1427

Improper Neutralization of Input Used for LLM Prompting

The product uses externally-provided data to build prompts provided to large language models (LLMs), but the way these prompts are constructed causes the LLM to fail to distinguish between user-supplied inputs and developer provided system directives.

CVE-2024-3303 (GCVE-0-2024-3303)
Vulnerability from cvelistv5
Published
2025-02-13 08:31
Modified
2025-02-13 14:36
CWE
  • CWE-1427 - Improper Neutralization of Input Used for LLM Prompting
Summary
An issue was discovered in GitLab EE affecting all versions starting from 16.0 prior to 17.6.5, starting from 17.7 prior to 17.7.4, and starting from 17.8 prior to 17.8.2, which allows an attacker to exfiltrate contents of a private issue using prompt injection.
References
https://gitlab.com/gitlab-org/gitlab/-/issues/454460 issue-tracking, permissions-required
https://hackerone.com/reports/2418620 technical-description, exploit, permissions-required
Impacted products
Vendor Product Version
GitLab GitLab Version: 16.0   
Version: 17.7   
Version: 17.8   
    cpe:2.3:a:gitlab:gitlab:*:*:*:*:*:*:*:*
Create a notification for this product.
Show details on NVD website


{
  "containers": {
    "adp": [
      {
        "metrics": [
          {
            "other": {
              "content": {
                "id": "CVE-2024-3303",
                "options": [
                  {
                    "Exploitation": "none"
                  },
                  {
                    "Automatable": "no"
                  },
                  {
                    "Technical Impact": "total"
                  }
                ],
                "role": "CISA Coordinator",
                "timestamp": "2025-02-13T14:35:39.833821Z",
                "version": "2.0.3"
              },
              "type": "ssvc"
            }
          }
        ],
        "providerMetadata": {
          "dateUpdated": "2025-02-13T14:36:00.382Z",
          "orgId": "134c704f-9b21-4f2e-91b3-4a467353bcc0",
          "shortName": "CISA-ADP"
        },
        "title": "CISA ADP Vulnrichment"
      }
    ],
    "cna": {
      "affected": [
        {
          "cpes": [
            "cpe:2.3:a:gitlab:gitlab:*:*:*:*:*:*:*:*"
          ],
          "defaultStatus": "unaffected",
          "product": "GitLab",
          "repo": "git://git@gitlab.com:gitlab-org/gitlab.git",
          "vendor": "GitLab",
          "versions": [
            {
              "lessThan": "17.6.5",
              "status": "affected",
              "version": "16.0",
              "versionType": "semver"
            },
            {
              "lessThan": "17.7.4",
              "status": "affected",
              "version": "17.7",
              "versionType": "semver"
            },
            {
              "lessThan": "17.8.2",
              "status": "affected",
              "version": "17.8",
              "versionType": "semver"
            }
          ]
        }
      ],
      "credits": [
        {
          "lang": "en",
          "type": "finder",
          "value": "Thanks [joaxcar](https://hackerone.com/joaxcar) for reporting this vulnerability through our HackerOne bug bounty program"
        }
      ],
      "descriptions": [
        {
          "lang": "en",
          "value": "An issue was discovered in GitLab EE affecting all versions starting from 16.0 prior to 17.6.5, starting from 17.7 prior to 17.7.4, and starting from 17.8 prior to 17.8.2, which allows an attacker to exfiltrate contents of a private issue using prompt injection."
        }
      ],
      "metrics": [
        {
          "cvssV3_1": {
            "attackComplexity": "HIGH",
            "attackVector": "NETWORK",
            "availabilityImpact": "NONE",
            "baseScore": 6.4,
            "baseSeverity": "MEDIUM",
            "confidentialityImpact": "HIGH",
            "integrityImpact": "HIGH",
            "privilegesRequired": "LOW",
            "scope": "UNCHANGED",
            "userInteraction": "REQUIRED",
            "vectorString": "CVSS:3.1/AV:N/AC:H/PR:L/UI:R/S:U/C:H/I:H/A:N",
            "version": "3.1"
          },
          "format": "CVSS",
          "scenarios": [
            {
              "lang": "en",
              "value": "GENERAL"
            }
          ]
        }
      ],
      "problemTypes": [
        {
          "descriptions": [
            {
              "cweId": "CWE-1427",
              "description": "CWE-1427: Improper Neutralization of Input Used for LLM Prompting",
              "lang": "en",
              "type": "CWE"
            }
          ]
        }
      ],
      "providerMetadata": {
        "dateUpdated": "2025-02-13T08:31:11.062Z",
        "orgId": "ceab7361-8a18-47b1-92ba-4d7d25f6715a",
        "shortName": "GitLab"
      },
      "references": [
        {
          "name": "GitLab Issue #454460",
          "tags": [
            "issue-tracking",
            "permissions-required"
          ],
          "url": "https://gitlab.com/gitlab-org/gitlab/-/issues/454460"
        },
        {
          "name": "HackerOne Bug Bounty Report #2418620",
          "tags": [
            "technical-description",
            "exploit",
            "permissions-required"
          ],
          "url": "https://hackerone.com/reports/2418620"
        }
      ],
      "solutions": [
        {
          "lang": "en",
          "value": "Upgrade to versions 17.6.5, 17.7.4, 17.8.2 or above."
        }
      ],
      "title": "Improper Neutralization of Input Used for LLM Prompting in GitLab"
    }
  },
  "cveMetadata": {
    "assignerOrgId": "ceab7361-8a18-47b1-92ba-4d7d25f6715a",
    "assignerShortName": "GitLab",
    "cveId": "CVE-2024-3303",
    "datePublished": "2025-02-13T08:31:11.062Z",
    "dateReserved": "2024-04-04T10:30:36.615Z",
    "dateUpdated": "2025-02-13T14:36:00.382Z",
    "state": "PUBLISHED"
  },
  "dataType": "CVE_RECORD",
  "dataVersion": "5.1"
}

Mitigation

Phase: Architecture and Design

Description:

  • LLM-enabled applications should be designed to ensure proper sanitization of user-controllable input, ensuring that no intentionally misleading or dangerous characters can be included. Additionally, they should be designed in a way that ensures that user-controllable input is identified as untrusted and potentially dangerous.
Mitigation

Phase: Implementation

Description:

  • LLM prompts should be constructed in a way that effectively differentiates between user-supplied input and developer-constructed system prompting to reduce the chance of model confusion at inference-time.
Mitigation

Phase: Architecture and Design

Description:

  • LLM-enabled applications should be designed to ensure proper sanitization of user-controllable input, ensuring that no intentionally misleading or dangerous characters can be included. Additionally, they should be designed in a way that ensures that user-controllable input is identified as untrusted and potentially dangerous.
Mitigation

Phase: Implementation

Description:

  • Ensure that model training includes training examples that avoid leaking secrets and disregard malicious inputs. Train the model to recognize secrets, and label training data appropriately. Note that due to the non-deterministic nature of prompting LLMs, it is necessary to perform testing of the same test case several times in order to ensure that troublesome behavior is not possible. Additionally, testing should be performed each time a new model is used or a model's weights are updated.
Mitigation

Phases: Installation, Operation

Description:

  • During deployment/operation, use components that operate externally to the system to monitor the output and act as a moderator. These components are called different terms, such as supervisors or guardrails.
Mitigation

Phase: System Configuration

Description:

  • During system configuration, the model could be fine-tuned to better control and neutralize potentially dangerous inputs.

No CAPEC attack patterns related to this CWE.

Back to CWE stats page