Commit cc1e433

1 parent ad30cd2 commit cc1e433


4 files changed: +16 -8 lines changed


advisories/github-reviewed/2026/04/GHSA-68jq-c3rv-pcrr/GHSA-68jq-c3rv-pcrr.json

Lines changed: 4 additions & 2 deletions
@@ -1,9 +1,11 @@
 {
   "schema_version": "1.4.0",
   "id": "GHSA-68jq-c3rv-pcrr",
-  "modified": "2026-04-14T01:05:05Z",
+  "modified": "2026-04-15T21:00:46Z",
   "published": "2026-04-14T01:05:05Z",
-  "aliases": [],
+  "aliases": [
+    "CVE-2026-40476"
+  ],
   "summary": "graphql-php is affected by a Denial of Service via quadratic complexity in OverlappingFieldsCanBeMerged validation",
   "details": "The `OverlappingFieldsCanBeMerged` validation rule exhibits quadratic time complexity when processing queries with many repeated fields sharing the same response name. An attacker can send a crafted query like `{ hello hello hello ... }` with thousands of repeated fields, causing excessive CPU usage during validation before execution begins.\n\nThis is not mitigated by existing QueryDepth or QueryComplexity rules.\n\n**Observed impact (tested on v15.31.4):**\n- 1000 fields: ~0.6s\n- 2000 fields: ~2.4s\n- 3000 fields: ~5.3s\n- 5000 fields: request timeout (>20s)\n\n**Root cause:** `collectConflictsWithin()` performs O(n²) pairwise comparisons of all fields with the same response name. For identical repeated fields, every comparison returns \"no conflict\" but the quadratic iteration count causes resource exhaustion.\n\n**Fix:** Deduplicate structurally identical fields before pairwise comparison, reducing the complexity from O(n²) to O(u²) where u is the number of unique field signatures (typically 1 for this attack pattern).\n\n**Credit:** Ashwak N (ashwakn04@gmail.com)",
   "severity": [

advisories/github-reviewed/2026/04/GHSA-76hw-p97h-883f/GHSA-76hw-p97h-883f.json

Lines changed: 4 additions & 2 deletions
@@ -1,9 +1,11 @@
 {
   "schema_version": "1.4.0",
   "id": "GHSA-76hw-p97h-883f",
-  "modified": "2026-04-14T01:11:30Z",
+  "modified": "2026-04-15T21:00:58Z",
   "published": "2026-04-14T01:11:30Z",
-  "aliases": [],
+  "aliases": [
+    "CVE-2026-40491"
+  ],
   "summary": "gdown Affected by Arbitrary File Write via Path Traversal in gdown.extractall",
   "details": "### Summary\nThe gdown library (tested on v5.2.1) is vulnerable to a Path Traversal attack within its extractall functionality. When extracting a maliciously crafted ZIP or TAR archive, the library fails to sanitize or validate the filenames of the archive members. This allows files to be written outside the intended destination directory, potentially leading to arbitrary file overwrite and Remote Code Execution (RCE).\n\n### Details\nThe vulnerability exists in `gdown/extractall.py` within the `extractall()` function. The function takes an archive path and a destination directory (`to`), then calls the underlying `extractall()` method of Python's `tarfile` or `zipfile` modules without validating whether the archive members stay within the `to` boundary.\n\nVulnerable code:\n```python\n# gdown/extractall.py\ndef extractall(path, to=None):\n    # ... (omitted) ...\n    with opener(path, mode) as f:\n        f.extractall(path=to)  # Vulnerable: no path validation or filters\n```\nEven on modern Python versions (3.12+), if the `filter` parameter is not explicitly set or if the library's wrapper logic bypasses modern protections, path traversal remains possible as demonstrated in the PoC.\n\n### PoC\n#### Steps to Reproduce\n\n1. Create the malicious archive (`poc.py`):\n```python\nimport tarfile\nimport io\nimport os\n\n# Create a target directory\nos.makedirs(\"./safe_target/subfolder\", exist_ok=True)\n\n# Generate a TAR file containing a member with path traversal\nwith tarfile.open(\"evil.tar\", \"w\") as tar:\n    # Target: escape the subfolder and write to the parent 'safe_target'\n    payload = tarfile.TarInfo(name=\"../escape.txt\")\n    content = b\"Path Traversal Success!\"\n    payload.size = len(content)\n    tar.addfile(payload, io.BytesIO(content))\n\nprint(\"[+] evil.tar created.\")\n```\n2. Execute the vulnerable function:\n```bash\npython3 -c \"from gdown import extractall; extractall('evil.tar', to='./safe_target/subfolder')\"\n```\n3. Verify the escape:\n```bash\nls -l ./safe_target/escape.txt\n# Output: -rw-r--r-- 1 user user 23 Mar 15 2026 ./safe_target/escape.txt\n```\n\n### Impact\nAn attacker can provide a specially crafted archive that, when extracted via `gdown`, overwrites critical files on the victim's system.\n\n- Arbitrary file overwrite: overwriting `.bashrc`, `.ssh/authorized_keys`, or configuration files.\n- Remote Code Execution (RCE): overwriting executable scripts or Python modules within a virtual environment.\n\n### Recommended Mitigation\nImplement path validation to ensure that all extracted files are contained within the target directory.\n\n**Suggested fix:**\n\n```python\nimport os\n\ndef is_within_directory(directory, target):\n    abs_directory = os.path.abspath(directory)\n    abs_target = os.path.abspath(target)\n    prefix = os.path.commonpath([abs_directory])\n    return os.path.commonpath([abs_directory, abs_target]) == prefix\n\n# Inside extractall.py\nwith opener(path, mode) as f:\n    if isinstance(f, tarfile.TarFile):\n        for member in f.getmembers():\n            member_path = os.path.join(to, member.name)\n            if not is_within_directory(to, member_path):\n                raise Exception(\"Attempted Path Traversal in Tar File\")\n    f.extractall(path=to)\n```",
   "severity": [

advisories/github-reviewed/2026/04/GHSA-cmxv-58fp-fm3g/GHSA-cmxv-58fp-fm3g.json

Lines changed: 4 additions & 2 deletions
@@ -1,9 +1,11 @@
 {
   "schema_version": "1.4.0",
   "id": "GHSA-cmxv-58fp-fm3g",
-  "modified": "2026-04-14T01:07:42Z",
+  "modified": "2026-04-15T21:00:53Z",
   "published": "2026-04-14T01:07:42Z",
-  "aliases": [],
+  "aliases": [
+    "CVE-2026-40490"
+  ],
   "summary": "AsyncHttpClient leaks authorization credentials to untrusted domains on cross-origin redirects",
   "details": "### Impact\nWhen redirect following is enabled (`followRedirect(true)`), AsyncHttpClient forwards Authorization and Proxy-Authorization headers along with Realm credentials to arbitrary redirect targets regardless of domain, scheme, or port changes. This leaks credentials on cross-domain redirects and HTTPS-to-HTTP downgrades.\n\nAdditionally, even when `stripAuthorizationOnRedirect` is set to true, the Realm object containing plaintext credentials is still propagated to the redirect request, causing credential re-generation for Basic and Digest authentication schemes via NettyRequestFactory.\n\nAn attacker who controls a redirect target (via open redirect, DNS rebinding, or MITM on HTTP) can capture Bearer tokens, Basic auth credentials, or any other Authorization header value.\n\n### Patches\nFixed in version 3.0.9. Users should upgrade immediately.\n\nThe fix automatically strips Authorization and Proxy-Authorization headers and clears Realm credentials whenever a redirect crosses origin boundaries (different scheme, host, or port) or downgrades from HTTPS to HTTP.\n\n### Workarounds\nFor users unable to upgrade, set `stripAuthorizationOnRedirect(true)` in the client config and avoid using Realm-based authentication with redirect following enabled. Note that `stripAuthorizationOnRedirect(true)` alone is insufficient on versions prior to 3.0.9 because the Realm bypass still re-generates credentials.\n\nAlternatively, disable redirect following (`followRedirect(false)`) and handle redirects manually with origin validation.",
   "severity": [

advisories/github-reviewed/2026/04/GHSA-v7xq-3wx6-fqc2/GHSA-v7xq-3wx6-fqc2.json

Lines changed: 4 additions & 2 deletions
@@ -1,9 +1,11 @@
 {
   "schema_version": "1.4.0",
   "id": "GHSA-v7xq-3wx6-fqc2",
-  "modified": "2026-04-14T00:03:36Z",
+  "modified": "2026-04-15T21:00:49Z",
   "published": "2026-04-14T00:03:36Z",
-  "aliases": [],
+  "aliases": [
+    "CVE-2026-40481"
+  ],
   "summary": "In monetr, unauthenticated Stripe webhook reads attacker-sized request bodies before signature validation",
   "details": "### Summary\n\nThe public Stripe webhook endpoint fully reads the request body into memory before validating the Stripe signature. A remote unauthenticated attacker can send oversized POST bodies and cause substantial memory growth, leading to denial of service.\n\n### Details\n\nWhen Stripe webhooks are enabled, the Stripe webhook route is reachable without authentication. The handler only requires that a `Stripe-Signature` header be present, then buffers the entire request body in memory and only afterward attempts Stripe signature verification.\n\nBecause body buffering happens before signature validation, memory consumption is controlled by the attacker-supplied payload size even when the signature is invalid. Large requests or repeated requests can exhaust available memory and make the service unresponsive or crash.\n\nThis issue depends on Stripe webhooks being enabled. If an upstream proxy or load balancer already enforces a strict request-body limit smaller than the attacker payload, exploitability is reduced accordingly.\n\n### PoC\n\n```bash\nURL=\"http://127.0.0.1:4000/api/stripe/webhook\"\nPROC_NAME=\"monetr\"\nTOTAL_KIB=\"$(awk '/MemTotal:/ {print $2}' /proc/meminfo)\"\n\npython3 - <<'PY' | curl -s -o /dev/null \\\n  --limit-rate 10m \\\n  -H 'Stripe-Signature: t=1,v1=deadbeef' \\\n  --data-binary @- \\\n  \"$URL\" &\nimport sys\nsys.stdout.buffer.write(b\"A\" * (256 * 1024 * 1024))\nPY\nREQ_PID=$!\n\nwhile kill -0 \"$REQ_PID\" 2>/dev/null; do\n  ps -C \"$PROC_NAME\" -o rss=,%cpu= | awk -v total=\"$TOTAL_KIB\" '\n    {\n      printf \"%s mem=%.2fMiB / %.3fGiB cpu=%s%%\\n\", \"'\"$PROC_NAME\"'\", $1/1024, total/1024/1024, $2\n    }\n  '\n  sleep 1\ndone\n\nwait \"$REQ_PID\"\n# monetr mem rises substantially while processing the invalid webhook body before signature validation fails\n```\n\n### Impact\n\n- Type: Denial of service / uncontrolled resource consumption (CWE-400)\n- Who is impacted: Internet-reachable monetr deployments that have both Stripe\n  billing and Stripe webhooks enabled (Stripe.Enabled and Stripe.WebhooksEnabled). In practice this is the hosted/SaaS configuration. Self-hosted instances\n  are very unlikely to be affected, because Stripe billing is opt-in, is not part of a typical self-hosted setup, and\n  the webhook route short-circuits to 404 when it is not enabled; meaning the unbounded read is unreachable on a default\n  self-hosted deployment.\n- Security impact: A remote, unauthenticated attacker can cause the monetr server process to buffer attacker-controlled\n  payloads into memory before any signature validation occurs. Sufficiently large or repeated requests can drive memory\n  consumption high enough to make the API unresponsive or crash the process, denying service to all legitimate users of\n  the affected instance, not just users of the billing surface.\n- Attack preconditions: The attacker must be able to reach the /api/stripe/webhook endpoint over the network and the\n  target instance must have Stripe webhooks enabled. No authentication, prior account, user interaction, or knowledge of\n  the Stripe webhook secret is required. Exploitability is reduced (and may be effectively eliminated) on deployments\n  where an upstream proxy or load balancer enforces a request-body size limit smaller than the attacker's payload.",
   "severity": [
