GSD-2021-46987
Vulnerability from gsd
Modified: 2024-02-28 06:03
Details
In the Linux kernel, the following vulnerability has been resolved:

btrfs: fix deadlock when cloning inline extents and using qgroups

There are a few exceptional cases where cloning an inline extent needs to copy the inline extent data into a page of the destination inode.

When this happens, we end up starting a transaction while having a dirty page for the destination inode and while having the range locked in the destination's inode iotree too. Because when reserving metadata space for a transaction we may need to flush existing delalloc in case there is not enough free space, we have a mechanism in place to prevent a deadlock, which was introduced in commit 3d45f221ce627d ("btrfs: fix deadlock when cloning inline extent and low on free metadata space").

However when using qgroups, a transaction also reserves metadata qgroup space, which can also result in flushing delalloc in case there is not enough available space at the moment. When this happens we deadlock, since flushing delalloc requires locking the file range in the inode's iotree and the range was already locked at the very beginning of the clone operation, before attempting to start the transaction.

When this issue happens, stack traces like the following are reported:

  [72747.556262] task:kworker/u81:9   state:D stack:    0 pid:  225 ppid:     2 flags:0x00004000
  [72747.556268] Workqueue: writeback wb_workfn (flush-btrfs-1142)
  [72747.556271] Call Trace:
  [72747.556273]  __schedule+0x296/0x760
  [72747.556277]  schedule+0x3c/0xa0
  [72747.556279]  io_schedule+0x12/0x40
  [72747.556284]  __lock_page+0x13c/0x280
  [72747.556287]  ? generic_file_readonly_mmap+0x70/0x70
  [72747.556325]  extent_write_cache_pages+0x22a/0x440 [btrfs]
  [72747.556331]  ? __set_page_dirty_nobuffers+0xe7/0x160
  [72747.556358]  ? set_extent_buffer_dirty+0x5e/0x80 [btrfs]
  [72747.556362]  ? update_group_capacity+0x25/0x210
  [72747.556366]  ? cpumask_next_and+0x1a/0x20
  [72747.556391]  extent_writepages+0x44/0xa0 [btrfs]
  [72747.556394]  do_writepages+0x41/0xd0
  [72747.556398]  __writeback_single_inode+0x39/0x2a0
  [72747.556403]  writeback_sb_inodes+0x1ea/0x440
  [72747.556407]  __writeback_inodes_wb+0x5f/0xc0
  [72747.556410]  wb_writeback+0x235/0x2b0
  [72747.556414]  ? get_nr_inodes+0x35/0x50
  [72747.556417]  wb_workfn+0x354/0x490
  [72747.556420]  ? newidle_balance+0x2c5/0x3e0
  [72747.556424]  process_one_work+0x1aa/0x340
  [72747.556426]  worker_thread+0x30/0x390
  [72747.556429]  ? create_worker+0x1a0/0x1a0
  [72747.556432]  kthread+0x116/0x130
  [72747.556435]  ? kthread_park+0x80/0x80
  [72747.556438]  ret_from_fork+0x1f/0x30

  [72747.566958] Workqueue: btrfs-flush_delalloc btrfs_work_helper [btrfs]
  [72747.566961] Call Trace:
  [72747.566964]  __schedule+0x296/0x760
  [72747.566968]  ? finish_wait+0x80/0x80
  [72747.566970]  schedule+0x3c/0xa0
  [72747.566995]  wait_extent_bit.constprop.68+0x13b/0x1c0 [btrfs]
  [72747.566999]  ? finish_wait+0x80/0x80
  [72747.567024]  lock_extent_bits+0x37/0x90 [btrfs]
  [72747.567047]  btrfs_invalidatepage+0x299/0x2c0 [btrfs]
  [72747.567051]  ? find_get_pages_range_tag+0x2cd/0x380
  [72747.567076]  __extent_writepage+0x203/0x320 [btrfs]
  [72747.567102]  extent_write_cache_pages+0x2bb/0x440 [btrfs]
  [72747.567106]  ? update_load_avg+0x7e/0x5f0
  [72747.567109]  ? enqueue_entity+0xf4/0x6f0
  [72747.567134]  extent_writepages+0x44/0xa0 [btrfs]
  [72747.567137]  ? enqueue_task_fair+0x93/0x6f0
  [72747.567140]  do_writepages+0x41/0xd0
  [72747.567144]  __filemap_fdatawrite_range+0xc7/0x100
  [72747.567167]  btrfs_run_delalloc_work+0x17/0x40 [btrfs]
  [72747.567195]  btrfs_work_helper+0xc2/0x300 [btrfs]
  [72747.567200]  process_one_work+0x1aa/0x340
  [72747.567202]  worker_thread+0x30/0x390
  [72747.567205]  ? create_worker+0x1a0/0x1a0
  [72747.567208]  kthread+0x116/0x130
  [72747.567211]  ? kthread_park+0x80/0x80
  [72747.567214]  ret_from_fork+0x1f/0x30

  [72747.569686] task:fsstress        state:D stack:
---truncated---
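
For context, the inline-extent clone path described above is reachable from unprivileged user space through the FICLONERANGE ioctl (the mechanism behind cp --reflink). Below is a minimal, hedged sketch of how such a clone is issued; the mount point, file names, and sizes are illustrative assumptions, not a confirmed reproducer. Running it does not by itself demonstrate the deadlock, which additionally requires a qgroup-enabled filesystem under metadata space pressure (as exercised by fsstress in the report above).

  /*
   * Hedged sketch: reflink a small (inline-extent-sized) source file into
   * a destination file via FICLONERANGE. Paths and sizes are assumptions.
   */
  #include <fcntl.h>
  #include <linux/fs.h>   /* FICLONERANGE, struct file_clone_range */
  #include <stdio.h>
  #include <string.h>
  #include <sys/ioctl.h>
  #include <unistd.h>

  int main(void)
  {
      /* Files smaller than a few KiB are typically stored inline on btrfs. */
      int src = open("/mnt/btrfs/src", O_RDWR | O_CREAT, 0644);
      int dst = open("/mnt/btrfs/dst", O_RDWR | O_CREAT, 0644);
      if (src < 0 || dst < 0) {
          perror("open");
          return 1;
      }

      char buf[512];
      memset(buf, 'a', sizeof(buf));
      if (write(src, buf, sizeof(buf)) != (ssize_t)sizeof(buf)) {
          perror("write");
          return 1;
      }
      fsync(src);

      /* Clone the whole source (src_length == 0 means "to EOF"). */
      struct file_clone_range range = {
          .src_fd      = src,
          .src_offset  = 0,
          .src_length  = 0,
          .dest_offset = 0,
      };
      if (ioctl(dst, FICLONERANGE, &range) < 0) {
          perror("FICLONERANGE");
          return 1;
      }
      return 0;
  }

Whether btrfs copies the inline data into a destination page (the exceptional case that starts a transaction while the range is already locked) depends on the destination offset and existing extents, as the description above notes.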
Aliases
CVE-2021-46987

{
  "gsd": {
    "metadata": {
      "exploitCode": "unknown",
      "remediation": "unknown",
      "reportConfidence": "confirmed",
      "type": "vulnerability"
    },
    "osvSchema": {
      "aliases": [
        "CVE-2021-46987"
      ],
      "details": "In the Linux kernel, the following vulnerability has been resolved:\n\nbtrfs: fix deadlock when cloning inline extents and using qgroups\n\nThere are a few exceptional cases where cloning an inline extent needs to\ncopy the inline extent data into a page of the destination inode.\n\nWhen this happens, we end up starting a transaction while having a dirty\npage for the destination inode and while having the range locked in the\ndestination\u0027s inode iotree too. Because when reserving metadata space\nfor a transaction we may need to flush existing delalloc in case there is\nnot enough free space, we have a mechanism in place to prevent a deadlock,\nwhich was introduced in commit 3d45f221ce627d (\"btrfs: fix deadlock when\ncloning inline extent and low on free metadata space\").\n\nHowever when using qgroups, a transaction also reserves metadata qgroup\nspace, which can also result in flushing delalloc in case there is not\nenough available space at the moment. When this happens we deadlock, since\nflushing delalloc requires locking the file range in the inode\u0027s iotree\nand the range was already locked at the very beginning of the clone\noperation, before attempting to start the transaction.\n\nWhen this issue happens, stack traces like the following are reported:\n\n  [72747.556262] task:kworker/u81:9   state:D stack:    0 pid:  225 ppid:     2 flags:0x00004000\n  [72747.556268] Workqueue: writeback wb_workfn (flush-btrfs-1142)\n  [72747.556271] Call Trace:\n  [72747.556273]  __schedule+0x296/0x760\n  [72747.556277]  schedule+0x3c/0xa0\n  [72747.556279]  io_schedule+0x12/0x40\n  [72747.556284]  __lock_page+0x13c/0x280\n  [72747.556287]  ? generic_file_readonly_mmap+0x70/0x70\n  [72747.556325]  extent_write_cache_pages+0x22a/0x440 [btrfs]\n  [72747.556331]  ? __set_page_dirty_nobuffers+0xe7/0x160\n  [72747.556358]  ? set_extent_buffer_dirty+0x5e/0x80 [btrfs]\n  [72747.556362]  ? update_group_capacity+0x25/0x210\n  [72747.556366]  ? cpumask_next_and+0x1a/0x20\n  [72747.556391]  extent_writepages+0x44/0xa0 [btrfs]\n  [72747.556394]  do_writepages+0x41/0xd0\n  [72747.556398]  __writeback_single_inode+0x39/0x2a0\n  [72747.556403]  writeback_sb_inodes+0x1ea/0x440\n  [72747.556407]  __writeback_inodes_wb+0x5f/0xc0\n  [72747.556410]  wb_writeback+0x235/0x2b0\n  [72747.556414]  ? get_nr_inodes+0x35/0x50\n  [72747.556417]  wb_workfn+0x354/0x490\n  [72747.556420]  ? newidle_balance+0x2c5/0x3e0\n  [72747.556424]  process_one_work+0x1aa/0x340\n  [72747.556426]  worker_thread+0x30/0x390\n  [72747.556429]  ? create_worker+0x1a0/0x1a0\n  [72747.556432]  kthread+0x116/0x130\n  [72747.556435]  ? kthread_park+0x80/0x80\n  [72747.556438]  ret_from_fork+0x1f/0x30\n\n  [72747.566958] Workqueue: btrfs-flush_delalloc btrfs_work_helper [btrfs]\n  [72747.566961] Call Trace:\n  [72747.566964]  __schedule+0x296/0x760\n  [72747.566968]  ? finish_wait+0x80/0x80\n  [72747.566970]  schedule+0x3c/0xa0\n  [72747.566995]  wait_extent_bit.constprop.68+0x13b/0x1c0 [btrfs]\n  [72747.566999]  ? finish_wait+0x80/0x80\n  [72747.567024]  lock_extent_bits+0x37/0x90 [btrfs]\n  [72747.567047]  btrfs_invalidatepage+0x299/0x2c0 [btrfs]\n  [72747.567051]  ? find_get_pages_range_tag+0x2cd/0x380\n  [72747.567076]  __extent_writepage+0x203/0x320 [btrfs]\n  [72747.567102]  extent_write_cache_pages+0x2bb/0x440 [btrfs]\n  [72747.567106]  ? update_load_avg+0x7e/0x5f0\n  [72747.567109]  ? enqueue_entity+0xf4/0x6f0\n  [72747.567134]  extent_writepages+0x44/0xa0 [btrfs]\n  [72747.567137]  ? 
enqueue_task_fair+0x93/0x6f0\n  [72747.567140]  do_writepages+0x41/0xd0\n  [72747.567144]  __filemap_fdatawrite_range+0xc7/0x100\n  [72747.567167]  btrfs_run_delalloc_work+0x17/0x40 [btrfs]\n  [72747.567195]  btrfs_work_helper+0xc2/0x300 [btrfs]\n  [72747.567200]  process_one_work+0x1aa/0x340\n  [72747.567202]  worker_thread+0x30/0x390\n  [72747.567205]  ? create_worker+0x1a0/0x1a0\n  [72747.567208]  kthread+0x116/0x130\n  [72747.567211]  ? kthread_park+0x80/0x80\n  [72747.567214]  ret_from_fork+0x1f/0x30\n\n  [72747.569686] task:fsstress        state:D stack:    \n---truncated---",
      "id": "GSD-2021-46987",
      "modified": "2024-02-28T06:03:57.586942Z",
      "schema_version": "1.4.0"
    }
  },
  "namespaces": {
    "cve.org": {
      "CVE_data_meta": {
        "ASSIGNER": "cve@kernel.org",
        "ID": "CVE-2021-46987",
        "STATE": "PUBLIC"
      },
      "affects": {
        "vendor": {
          "vendor_data": [
            {
              "product": {
                "product_data": [
                  {
                    "product_name": "Linux",
                    "version": {
                      "version_data": [
                        {
                          "version_affected": "\u003c",
                          "version_name": "c53e9653605d",
                          "version_value": "d5347827d0b4"
                        },
                        {
                          "version_value": "not down converted",
                          "x_cve_json_5_version_data": {
                            "defaultStatus": "affected",
                            "versions": [
                              {
                                "status": "affected",
                                "version": "5.9"
                              },
                              {
                                "lessThan": "5.9",
                                "status": "unaffected",
                                "version": "0",
                                "versionType": "custom"
                              },
                              {
                                "lessThanOrEqual": "5.11.*",
                                "status": "unaffected",
                                "version": "5.11.22",
                                "versionType": "custom"
                              },
                              {
                                "lessThanOrEqual": "5.12.*",
                                "status": "unaffected",
                                "version": "5.12.5",
                                "versionType": "custom"
                              },
                              {
                                "lessThanOrEqual": "*",
                                "status": "unaffected",
                                "version": "5.13",
                                "versionType": "original_commit_for_fix"
                              }
                            ]
                          }
                        }
                      ]
                    }
                  }
                ]
              },
              "vendor_name": "Linux"
            }
          ]
        }
      },
      "data_format": "MITRE",
      "data_type": "CVE",
      "data_version": "4.0",
      "description": {
        "description_data": [
          {
            "lang": "eng",
            "value": "In the Linux kernel, the following vulnerability has been resolved:\n\nbtrfs: fix deadlock when cloning inline extents and using qgroups\n\nThere are a few exceptional cases where cloning an inline extent needs to\ncopy the inline extent data into a page of the destination inode.\n\nWhen this happens, we end up starting a transaction while having a dirty\npage for the destination inode and while having the range locked in the\ndestination\u0027s inode iotree too. Because when reserving metadata space\nfor a transaction we may need to flush existing delalloc in case there is\nnot enough free space, we have a mechanism in place to prevent a deadlock,\nwhich was introduced in commit 3d45f221ce627d (\"btrfs: fix deadlock when\ncloning inline extent and low on free metadata space\").\n\nHowever when using qgroups, a transaction also reserves metadata qgroup\nspace, which can also result in flushing delalloc in case there is not\nenough available space at the moment. When this happens we deadlock, since\nflushing delalloc requires locking the file range in the inode\u0027s iotree\nand the range was already locked at the very beginning of the clone\noperation, before attempting to start the transaction.\n\nWhen this issue happens, stack traces like the following are reported:\n\n  [72747.556262] task:kworker/u81:9   state:D stack:    0 pid:  225 ppid:     2 flags:0x00004000\n  [72747.556268] Workqueue: writeback wb_workfn (flush-btrfs-1142)\n  [72747.556271] Call Trace:\n  [72747.556273]  __schedule+0x296/0x760\n  [72747.556277]  schedule+0x3c/0xa0\n  [72747.556279]  io_schedule+0x12/0x40\n  [72747.556284]  __lock_page+0x13c/0x280\n  [72747.556287]  ? generic_file_readonly_mmap+0x70/0x70\n  [72747.556325]  extent_write_cache_pages+0x22a/0x440 [btrfs]\n  [72747.556331]  ? __set_page_dirty_nobuffers+0xe7/0x160\n  [72747.556358]  ? set_extent_buffer_dirty+0x5e/0x80 [btrfs]\n  [72747.556362]  ? update_group_capacity+0x25/0x210\n  [72747.556366]  ? cpumask_next_and+0x1a/0x20\n  [72747.556391]  extent_writepages+0x44/0xa0 [btrfs]\n  [72747.556394]  do_writepages+0x41/0xd0\n  [72747.556398]  __writeback_single_inode+0x39/0x2a0\n  [72747.556403]  writeback_sb_inodes+0x1ea/0x440\n  [72747.556407]  __writeback_inodes_wb+0x5f/0xc0\n  [72747.556410]  wb_writeback+0x235/0x2b0\n  [72747.556414]  ? get_nr_inodes+0x35/0x50\n  [72747.556417]  wb_workfn+0x354/0x490\n  [72747.556420]  ? newidle_balance+0x2c5/0x3e0\n  [72747.556424]  process_one_work+0x1aa/0x340\n  [72747.556426]  worker_thread+0x30/0x390\n  [72747.556429]  ? create_worker+0x1a0/0x1a0\n  [72747.556432]  kthread+0x116/0x130\n  [72747.556435]  ? kthread_park+0x80/0x80\n  [72747.556438]  ret_from_fork+0x1f/0x30\n\n  [72747.566958] Workqueue: btrfs-flush_delalloc btrfs_work_helper [btrfs]\n  [72747.566961] Call Trace:\n  [72747.566964]  __schedule+0x296/0x760\n  [72747.566968]  ? finish_wait+0x80/0x80\n  [72747.566970]  schedule+0x3c/0xa0\n  [72747.566995]  wait_extent_bit.constprop.68+0x13b/0x1c0 [btrfs]\n  [72747.566999]  ? finish_wait+0x80/0x80\n  [72747.567024]  lock_extent_bits+0x37/0x90 [btrfs]\n  [72747.567047]  btrfs_invalidatepage+0x299/0x2c0 [btrfs]\n  [72747.567051]  ? find_get_pages_range_tag+0x2cd/0x380\n  [72747.567076]  __extent_writepage+0x203/0x320 [btrfs]\n  [72747.567102]  extent_write_cache_pages+0x2bb/0x440 [btrfs]\n  [72747.567106]  ? update_load_avg+0x7e/0x5f0\n  [72747.567109]  ? enqueue_entity+0xf4/0x6f0\n  [72747.567134]  extent_writepages+0x44/0xa0 [btrfs]\n  [72747.567137]  ? 
enqueue_task_fair+0x93/0x6f0\n  [72747.567140]  do_writepages+0x41/0xd0\n  [72747.567144]  __filemap_fdatawrite_range+0xc7/0x100\n  [72747.567167]  btrfs_run_delalloc_work+0x17/0x40 [btrfs]\n  [72747.567195]  btrfs_work_helper+0xc2/0x300 [btrfs]\n  [72747.567200]  process_one_work+0x1aa/0x340\n  [72747.567202]  worker_thread+0x30/0x390\n  [72747.567205]  ? create_worker+0x1a0/0x1a0\n  [72747.567208]  kthread+0x116/0x130\n  [72747.567211]  ? kthread_park+0x80/0x80\n  [72747.567214]  ret_from_fork+0x1f/0x30\n\n  [72747.569686] task:fsstress        state:D stack:    \n---truncated---"
          }
        ]
      },
      "generator": {
        "engine": "bippy-1e70cc10feda"
      },
      "problemtype": {
        "problemtype_data": [
          {
            "description": [
              {
                "lang": "eng",
                "value": "n/a"
              }
            ]
          }
        ]
      },
      "references": {
        "reference_data": [
          {
            "name": "https://git.kernel.org/stable/c/d5347827d0b4b2250cbce6eccaa1c81dc78d8651",
            "refsource": "MISC",
            "url": "https://git.kernel.org/stable/c/d5347827d0b4b2250cbce6eccaa1c81dc78d8651"
          },
          {
            "name": "https://git.kernel.org/stable/c/96157707c0420e3d3edfe046f1cc797fee117ade",
            "refsource": "MISC",
            "url": "https://git.kernel.org/stable/c/96157707c0420e3d3edfe046f1cc797fee117ade"
          },
          {
            "name": "https://git.kernel.org/stable/c/f9baa501b4fd6962257853d46ddffbc21f27e344",
            "refsource": "MISC",
            "url": "https://git.kernel.org/stable/c/f9baa501b4fd6962257853d46ddffbc21f27e344"
          }
        ]
      }
    },
    "nvd.nist.gov": {
      "cve": {
        "descriptions": [
          {
            "lang": "en",
            "value": "In the Linux kernel, the following vulnerability has been resolved:\n\nbtrfs: fix deadlock when cloning inline extents and using qgroups\n\nThere are a few exceptional cases where cloning an inline extent needs to\ncopy the inline extent data into a page of the destination inode.\n\nWhen this happens, we end up starting a transaction while having a dirty\npage for the destination inode and while having the range locked in the\ndestination\u0027s inode iotree too. Because when reserving metadata space\nfor a transaction we may need to flush existing delalloc in case there is\nnot enough free space, we have a mechanism in place to prevent a deadlock,\nwhich was introduced in commit 3d45f221ce627d (\"btrfs: fix deadlock when\ncloning inline extent and low on free metadata space\").\n\nHowever when using qgroups, a transaction also reserves metadata qgroup\nspace, which can also result in flushing delalloc in case there is not\nenough available space at the moment. When this happens we deadlock, since\nflushing delalloc requires locking the file range in the inode\u0027s iotree\nand the range was already locked at the very beginning of the clone\noperation, before attempting to start the transaction.\n\nWhen this issue happens, stack traces like the following are reported:\n\n  [72747.556262] task:kworker/u81:9   state:D stack:    0 pid:  225 ppid:     2 flags:0x00004000\n  [72747.556268] Workqueue: writeback wb_workfn (flush-btrfs-1142)\n  [72747.556271] Call Trace:\n  [72747.556273]  __schedule+0x296/0x760\n  [72747.556277]  schedule+0x3c/0xa0\n  [72747.556279]  io_schedule+0x12/0x40\n  [72747.556284]  __lock_page+0x13c/0x280\n  [72747.556287]  ? generic_file_readonly_mmap+0x70/0x70\n  [72747.556325]  extent_write_cache_pages+0x22a/0x440 [btrfs]\n  [72747.556331]  ? __set_page_dirty_nobuffers+0xe7/0x160\n  [72747.556358]  ? set_extent_buffer_dirty+0x5e/0x80 [btrfs]\n  [72747.556362]  ? update_group_capacity+0x25/0x210\n  [72747.556366]  ? cpumask_next_and+0x1a/0x20\n  [72747.556391]  extent_writepages+0x44/0xa0 [btrfs]\n  [72747.556394]  do_writepages+0x41/0xd0\n  [72747.556398]  __writeback_single_inode+0x39/0x2a0\n  [72747.556403]  writeback_sb_inodes+0x1ea/0x440\n  [72747.556407]  __writeback_inodes_wb+0x5f/0xc0\n  [72747.556410]  wb_writeback+0x235/0x2b0\n  [72747.556414]  ? get_nr_inodes+0x35/0x50\n  [72747.556417]  wb_workfn+0x354/0x490\n  [72747.556420]  ? newidle_balance+0x2c5/0x3e0\n  [72747.556424]  process_one_work+0x1aa/0x340\n  [72747.556426]  worker_thread+0x30/0x390\n  [72747.556429]  ? create_worker+0x1a0/0x1a0\n  [72747.556432]  kthread+0x116/0x130\n  [72747.556435]  ? kthread_park+0x80/0x80\n  [72747.556438]  ret_from_fork+0x1f/0x30\n\n  [72747.566958] Workqueue: btrfs-flush_delalloc btrfs_work_helper [btrfs]\n  [72747.566961] Call Trace:\n  [72747.566964]  __schedule+0x296/0x760\n  [72747.566968]  ? finish_wait+0x80/0x80\n  [72747.566970]  schedule+0x3c/0xa0\n  [72747.566995]  wait_extent_bit.constprop.68+0x13b/0x1c0 [btrfs]\n  [72747.566999]  ? finish_wait+0x80/0x80\n  [72747.567024]  lock_extent_bits+0x37/0x90 [btrfs]\n  [72747.567047]  btrfs_invalidatepage+0x299/0x2c0 [btrfs]\n  [72747.567051]  ? find_get_pages_range_tag+0x2cd/0x380\n  [72747.567076]  __extent_writepage+0x203/0x320 [btrfs]\n  [72747.567102]  extent_write_cache_pages+0x2bb/0x440 [btrfs]\n  [72747.567106]  ? update_load_avg+0x7e/0x5f0\n  [72747.567109]  ? enqueue_entity+0xf4/0x6f0\n  [72747.567134]  extent_writepages+0x44/0xa0 [btrfs]\n  [72747.567137]  ? 
enqueue_task_fair+0x93/0x6f0\n  [72747.567140]  do_writepages+0x41/0xd0\n  [72747.567144]  __filemap_fdatawrite_range+0xc7/0x100\n  [72747.567167]  btrfs_run_delalloc_work+0x17/0x40 [btrfs]\n  [72747.567195]  btrfs_work_helper+0xc2/0x300 [btrfs]\n  [72747.567200]  process_one_work+0x1aa/0x340\n  [72747.567202]  worker_thread+0x30/0x390\n  [72747.567205]  ? create_worker+0x1a0/0x1a0\n  [72747.567208]  kthread+0x116/0x130\n  [72747.567211]  ? kthread_park+0x80/0x80\n  [72747.567214]  ret_from_fork+0x1f/0x30\n\n  [72747.569686] task:fsstress        state:D stack:    \n---truncated---"
          }
        ],
        "id": "CVE-2021-46987",
        "lastModified": "2024-02-28T14:06:45.783",
        "metrics": {},
        "published": "2024-02-28T09:15:37.583",
        "references": [
          {
            "source": "416baaa9-dc9f-4396-8d5f-8c081fb06d67",
            "url": "https://git.kernel.org/stable/c/96157707c0420e3d3edfe046f1cc797fee117ade"
          },
          {
            "source": "416baaa9-dc9f-4396-8d5f-8c081fb06d67",
            "url": "https://git.kernel.org/stable/c/d5347827d0b4b2250cbce6eccaa1c81dc78d8651"
          },
          {
            "source": "416baaa9-dc9f-4396-8d5f-8c081fb06d67",
            "url": "https://git.kernel.org/stable/c/f9baa501b4fd6962257853d46ddffbc21f27e344"
          }
        ],
        "sourceIdentifier": "416baaa9-dc9f-4396-8d5f-8c081fb06d67",
        "vulnStatus": "Awaiting Analysis"
      }
    }
  }
}
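
As a worked reading of the version ranges recorded above (introduced in 5.9; fixed in 5.11.22 for the 5.11.y series, 5.12.5 for 5.12.y, and mainline 5.13), the following hedged sketch compares the running kernel's uname(2) release string against those boundaries. It is a first-pass triage aid under the stated assumptions only, not an authoritative verdict: it assumes a conventional "major.minor.patch" release string, and distribution kernels frequently backport fixes without changing these numbers.

  /*
   * Hedged triage sketch for the ranges recorded in this entry:
   *   introduced in 5.9; fixed in 5.11.22 (5.11.y), 5.12.5 (5.12.y),
   *   and mainline 5.13.
   */
  #include <stdio.h>
  #include <sys/utsname.h>

  int main(void)
  {
      struct utsname u;
      if (uname(&u) != 0) {
          perror("uname");
          return 1;
      }

      int maj = 0, min = 0, pat = 0;
      if (sscanf(u.release, "%d.%d.%d", &maj, &min, &pat) < 2) {
          fprintf(stderr, "unrecognized release: %s\n", u.release);
          return 1;
      }

      long v = maj * 1000000L + min * 1000L + pat;       /* 5.11.22 -> 5011022 */
      int fixed = (maj == 5 && min == 11 && pat >= 22)   /* 5.11.22+ */
               || (maj == 5 && min == 12 && pat >= 5)    /* 5.12.5+  */
               || v >= 5013000L;                         /* 5.13+    */
      int affected = v >= 5009000L && !fixed;            /* introduced in 5.9 */

      printf("%s: %s per the recorded ranges\n", u.release,
             affected ? "potentially affected" : "not affected");
      return 0;
  }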

