📦 Bump versions of multiple dependencies to address vulnerabilities #30
Lineaje has automatically created this pull request to resolve the following CVEs:
A path traversal vulnerability in `PackageIndex` was fixed in setuptools version 78.1.1.

### Details
```python
def _download_url(self, url, tmpdir):
    # Determine download filename
    #
    name, _fragment = egg_info_for_url(url)
    if name:
        while '..' in name:
            name = name.replace('..', '.').replace('\\', '_')
    else:
        name = "__downloaded__"  # default if URL has no path contents

    if name.endswith('.egg.zip'):
        name = name[:-4]  # strip the extra .zip before download

--> filename = os.path.join(tmpdir, name)
```

Here: https://github.com/pypa/setuptools/blob/6ead555c5fb29bc57fe6105b1bffc163f56fd558/setuptools/package_index.py#L810C1-L825C88
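For illustration, a minimal sketch (paths are hypothetical, not taken from the advisory) of why the final `os.path.join(tmpdir, name)` call is unsafe:

```python
import os

# os.path.join drops every earlier component once it sees an absolute path,
# so an absolute `name` escapes the intended temporary directory entirely.
tmpdir = "/tmp/easy_install-abc123"
name = "/etc/cron.d/malicious"  # attacker-influenced filename (hypothetical)
print(os.path.join(tmpdir, name))  # -> /etc/cron.d/malicious
```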
`os.path.join()` discards the first argument `tmpdir` if the second begins with a slash or drive letter. `name` is derived from a URL without sufficient sanitization. While there is some attempt to sanitize by replacing instances of '..' with '.', it is insufficient.

### Risk Assessment

As easy_install and package_index are deprecated, the exploitation surface is reduced. However, it seems this could be exploited in a fashion similar to GHSA-r9hx-vwmv-q579, and as described by POC 4 in the GHSA-cx63-2mw6-8hw5 report: via malicious URLs present on the pages of a package index.

### Impact

An attacker would be able to write files to arbitrary locations on the filesystem with the permissions of the process running the Python code, which could escalate to RCE depending on the context.

### References

- https://huntr.com/bounties/d6362117-ad57-4e83-951f-b8141c6e7ca5
- pypa/setuptools#4946
The `package_index` module of pypa/setuptools versions up to 69.1.1 allows for remote code
execution via its download functions. These functions, which
are used to download packages from URLs provided by users or
retrieved from package index servers, are susceptible to code
injection. If these functions are exposed to user-controlled
inputs, such as package URLs, they can execute arbitrary
commands on the system. The issue is fixed in version 70.0.
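As a generic, hypothetical illustration of this class of bug (not setuptools' actual code), interpolating an attacker-controlled URL into a shell command lets the URL smuggle in extra commands:

```python
import subprocess

# Hypothetical pattern only: the "URL" carries an injected shell command.
url = "http://example.com/pkg-1.0.tar.gz; echo INJECTED-COMMAND-RAN"
subprocess.run("curl -O " + url, shell=True)  # the injected command also runs
```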
Any program using the pure-Python protobuf backend to parse untrusted Protocol Buffers data containing an arbitrary number of recursive groups, recursive messages or a series of SGROUP tags can be corrupted by exceeding the Python recursion limit.

Reporter: Alexis Challande, Trail of Bits Ecosystem
Security Team
[email protected]
Affected versions: This issue only affects the pure-Python
implementation
of protobuf-python backend. This is the implementation when
`PROTOCOL_BUFFERS_PYTHON_IMPLEMENTATION=python` environment variable is set or the default when protobuf is used from
Bazel or pure-Python PyPi wheels. CPython PyPi wheels do not
use pure-Python by default. This is a Python variant of a
previous issue affecting
protobuf-java.
### Severity

This is a potential Denial of Service. Parsing nested protobuf data creates unbounded recursions that can be abused by an attacker.

### Proof of Concept

For reproduction details, please refer to the unit tests decoder_test.py and message_test.

### Remediation and Mitigation

A mitigation is available now. Please update to the latest available versions of the following packages:

* protobuf-python (4.25.8, 5.29.5, 6.31.1)
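As a rough sketch of the failure mode (this is not the advisory's PoC; the real reproductions are in the unit tests referenced above), deeply nested groups can drive the pure-Python decoder past the recursion limit:

```python
import os

# Must be set before importing protobuf to force the pure-Python backend.
os.environ["PROTOCOL_BUFFERS_PYTHON_IMPLEMENTATION"] = "python"

from google.protobuf import empty_pb2  # any message type works; the groups land in unknown fields

# 0x0b is a START_GROUP tag for field 1; each level of nesting adds one
# recursive call while the decoder skips the unknown group.
payload = b"\x0b" * 100_000
try:
    empty_pb2.Empty.FromString(payload)
except Exception as exc:
    # RecursionError on vulnerable versions; patched versions fail with a parse error instead.
    print(type(exc).__name__)
```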
When Jackson is used to parse an input file and it has deeply nested data, Jackson could end up throwing a StackOverflowError if the depth is particularly large.

### Patches

jackson-core 2.15.0 contains
a configurable limit for how deep Jackson will traverse in an
input document, defaulting to an allowable depth of 1000.
Change is in
FasterXML/jackson-core#943.
jackson-core will throw a StreamConstraintsException if the
limit is reached. jackson-databind also benefits from this
change because it uses jackson-core to parse JSON inputs.

### Workarounds

Users should avoid parsing input files from
untrusted sources.
A flaw in jackson-core's `JsonLocation._appendSourceDesc` method allows up to 500 bytes of unintended memory content to be included in
exception messages. When parsing JSON from a byte array with
an offset and length, the exception message incorrectly reads
from the beginning of the array instead of the logical
payload start. This results in possible information
disclosure in systems using pooled or reused buffers,
like Netty or Vert.x.

### Details

The vulnerability affects the creation of exception messages like:

```
JsonParseException: Unexpected character ... at [Source: (byte[])...]
```

When `JsonFactory.createParser(byte[] data, int offset, int len)` is used, and an error occurs while parsing, the exception message should include a snippet from the specified logical payload. However, the method `_appendSourceDesc` ignores the `offset` and always starts reading from index 0. If the buffer contains residual sensitive data from a previous request, such as credentials
or document contents, that data may be exposed if the
exception is propagated to the client. The issue particularly impacts server applications using:

* Pooled byte buffers (e.g., Netty)
* Frameworks that surface parse errors in HTTP responses
* Default Jackson settings (i.e., `INCLUDE_SOURCE_IN_LOCATION` is enabled)

A documented real-world example is
CVE-2021-22145
in Elasticsearch, which stemmed from the same root cause.

### Attack Scenario

An attacker sends malformed JSON to a service
using Jackson and pooled byte buffers (e.g., Netty-based HTTP
servers). If the server reuses a buffer and includes the
parser’s exception in its HTTP 400 response, the attacker
may receive residual data from previous requests.

### Proof of Concept

```java
byte[] buffer = new byte[1000];
System.arraycopy("SECRET".getBytes(), 0, buffer, 0, 6);
System.arraycopy("{ \"bad\": }".getBytes(), 0, buffer, 700, 10);

JsonFactory factory = new JsonFactory();
JsonParser parser = factory.createParser(buffer, 700, 20);
parser.nextToken(); // throws exception
// Exception message will include "SECRET"
```

### Patches

This issue was silently fixed in jackson-core version 2.13.0, released on September
30, 2021, via PR
#652.
All users should upgrade to version 2.13.0 or later.

### Workarounds

If upgrading is not immediately possible, applications can mitigate the issue by:

1. Disabling exception message exposure to clients: avoid returning parsing exception messages in HTTP responses.
2. Disabling source inclusion in exceptions by setting:

```java
jsonFactory.disable(JsonFactory.Feature.INCLUDE_SOURCE_IN_LOCATION);
```

This prevents Jackson from embedding any source content in exception messages, avoiding leakage.

### References

- Pull Request #652 (fix implementation)
- CVE-2021-22145 (Elasticsearch exposure of this flaw)
A leniency in h11's parsing of line terminators in chunked-coding message bodies can lead to request smuggling vulnerabilities under certain conditions.

### Details

HTTP/1.1 Chunked-Encoding bodies are formatted as a sequence of "chunks", each of which consists of:

- chunk length
- `\r\n`
- `length` bytes of content
- `\r\n`

In versions of h11 up to 0.14.0, h11 instead parsed them as:

- chunk length
- `\r\n`
- `length` bytes of content
- any two bytes

i.e. it did not validate that the trailing `\r\n` bytes were correct, and if you put 2 bytes of garbage there it would be accepted, instead of correctly rejecting the body as malformed. By
itself this is harmless. However, suppose you have a proxy or
reverse-proxy that tries to analyze HTTP requests, and your
proxy has a different bug in parsing Chunked-Encoding,
acting as if the format is:

- chunk length
- `\r\n`
- `length` bytes of content
- more bytes of content, as many as it takes until you find a `\r\n`

For example, pound had this bug -- it can happen if an implementer uses a generic "read until end of line" helper to consume the trailing `\r\n`. In this case, h11 and your proxy may both accept the same stream of bytes, but interpret them differently. For example, consider the following HTTP request(s) (assume all line breaks are `\r\n`):

```
GET /one HTTP/1.1
Host: localhost
Transfer-Encoding: chunked

5
AAAAAXX2
45
0

GET /two HTTP/1.1
Host: localhost
Transfer-Encoding: chunked

0
```

Here h11 will interpret it as two requests, one with body `AAAAA45` and one with an empty body, while our hypothetical buggy proxy will interpret it as a single request, with body `AAAAXX20\r\n\r\nGET /two ...`. And any time two HTTP
them differently, you have the conditions for a "request
smuggling" attack. For example, if
`/two` is a dangerous endpoint and the job of the reverse proxy is to stop requests
from getting there, then an attacker could use a bytestream
like the above to circumvent this protection. Even worse, if
our buggy reverse proxy receives two requests from different
users:
```
GET /one HTTP/1.1
Host: localhost
Transfer-Encoding: chunked

5
AAAAAXX999
0

GET /two HTTP/1.1
Host: localhost
Cookie: SESSION_KEY=abcdef...
```

it will consider the first request to be complete and
valid, and send both on to the h11-based web server over the
same socket. The server will then see the two concatenated
requests, and interpret them as one request to
`/one` whose body includes `/two`'s session key, potentially allowing one user to steal another's credentials.

### Patches

Fixed in h11 0.15.0.

### Workarounds

Since exploitation requires the
combination of buggy h11 with a buggy (reverse) proxy, fixing
either component is sufficient to mitigate this issue.

### Credits

Reported by Jeppe Bonde Weikop on 2025-01-09.
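As a rough sketch (not part of the advisory), the lenient parsing can be observed by feeding the first byte stream above directly to h11; versions up to 0.14.0 accept the garbage bytes after the first chunk, while 0.15.0 rejects the stream:

```python
import h11

# First byte stream from the example above, with explicit \r\n line breaks.
raw = (
    b"GET /one HTTP/1.1\r\nHost: localhost\r\nTransfer-Encoding: chunked\r\n\r\n"
    b"5\r\nAAAAAXX2\r\n45\r\n0\r\n\r\n"
)

conn = h11.Connection(our_role=h11.SERVER)
try:
    conn.receive_data(raw)
    # On h11 <= 0.14.0 this prints Request, Data(b"AAAAA"), Data(b"45") and
    # EndOfMessage: the bogus "XX" chunk terminator is silently accepted.
    while True:
        event = conn.next_event()
        print(event)
        if event is h11.NEED_DATA or isinstance(event, h11.EndOfMessage):
            break
except h11.RemoteProtocolError as exc:
    # h11 0.15.0 instead rejects the malformed chunk terminator.
    print("rejected:", exc)
```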
An unauthenticated remote attacker can send a crafted HTTP Range header that triggers quadratic-time processing in Starlette's `FileResponse` Range parsing/merging logic. This enables CPU exhaustion per request, causing denial-of-service for endpoints serving files (e.g., `StaticFiles` or any use of `FileResponse`).

### Details

Starlette parses multi-range requests in `FileResponse._parse_range_header()`, then merges ranges using an O(n^2) algorithm.

```python
# starlette/responses.py
_RANGE_PATTERN = re.compile(r"(\d*)-(\d*)")  # vulnerable to O(n^2) complexity ReDoS


class FileResponse(Response):
    @staticmethod
    def _parse_range_header(http_range: str, file_size: int) -> list[tuple[int, int]]:
        ranges: list[tuple[int, int]] = []
        try:
            units, range_ = http_range.split("=", 1)
        except ValueError:
            raise MalformedRangeHeader()
        # [...]
        ranges = [
            (
                int(_[0]) if _[0] else file_size - int(_[1]),
                int(_[1]) + 1 if _[0] and _[1] and int(_[1]) < file_size else file_size,
            )
            for _ in _RANGE_PATTERN.findall(range_)  # vulnerable
            if _ != ("", "")
        ]
```

The parsing loop of `FileResponse._parse_range_header()` uses a regular expression that is vulnerable to denial of service due to its O(n^2) complexity. A crafted `Range` header can maximize its complexity. The merge loop processes each input range by
scanning the entire result list, yielding quadratic behavior
with many disjoint ranges. A crafted Range header with many
small, non-overlapping ranges (or specially shaped numeric
substrings) maximizes comparisons. This affects any Starlette application that uses:

- `starlette.staticfiles.StaticFiles` (internally returns `FileResponse`) — `starlette/staticfiles.py:178`
- Direct `starlette.responses.FileResponse` responses

### PoC

```python
#!/usr/bin/env python3
import sys
import time

try:
    import starlette
    from starlette.responses import FileResponse
except Exception as e:
    print(f"[ERROR] Failed to import starlette: {e}")
    sys.exit(1)


def build_payload(length: int) -> str:
    """Build the Range header value body: '0' * num_zeros + '0-'"""
    return ("0" * length) + "a-"


def test(header: str, file_size: int) -> float:
    start = time.perf_counter()
    try:
        FileResponse._parse_range_header(header, file_size)
    except Exception:
        pass
    end = time.perf_counter()
    elapsed = end - start
    return elapsed


def run_once(num_zeros: int) -> None:
    range_body = build_payload(num_zeros)
    header = "bytes=" + range_body
    # Use a sufficiently large file_size so upper bounds default to file size
    file_size = max(len(range_body) + 10, 1_000_000)
    print(f"[DEBUG] range_body length: {len(range_body)} bytes")
    elapsed_time = test(header, file_size)
    print(f"[DEBUG] elapsed time: {elapsed_time:.6f} seconds\n")


if __name__ == "__main__":
    print(f"[INFO] Starlette Version: {starlette.__version__}")
    for n in [5000, 10000, 20000, 40000]:
        run_once(n)

"""
$ python3 poc_dos_range.py
[INFO] Starlette Version: 0.48.0
[DEBUG] range_body length: 5002 bytes
[DEBUG] elapsed time: 0.053932 seconds
[DEBUG] range_body length: 10002 bytes
[DEBUG] elapsed time: 0.209770 seconds
[DEBUG] range_body length: 20002 bytes
[DEBUG] elapsed time: 0.885296 seconds
[DEBUG] range_body length: 40002 bytes
[DEBUG] elapsed time: 3.238832 seconds
"""
```

### Impact

Any Starlette app serving files via FileResponse or StaticFiles; frameworks built on
Starlette (e.g., FastAPI) are indirectly impacted when using
file-serving endpoints. Unauthenticated remote attackers can
exploit this via a single HTTP request with a crafted Range
header.
When parsing a multi-part form with a large file (greater than the default max spool size), `starlette` will block the main thread to roll the file over to disk. This blocks the event loop, which means we can't
accept new connections.

### Details

Please see this discussion for details: Kludex/starlette#2927 (reply in thread).

In summary, the following UploadFile code (copied from here) has a minor bug. Instead of just checking for `self._in_memory`, we should also check if the additional bytes will cause a rollover.

```python
@property
def _in_memory(self) -> bool:
    # check for SpooledTemporaryFile._rolled
    rolled_to_disk = getattr(self.file, "_rolled", True)
    return not rolled_to_disk

async def write(self, data: bytes) -> None:
    if self.size is not None:
        self.size += len(data)
    if self._in_memory:
        self.file.write(data)
    else:
        await run_in_threadpool(self.file.write, data)
```

I have already created a PR which fixes the problem:
Kludex/starlette#2962

### PoC

See the discussion here for steps on how to reproduce.

### Impact

To be honest, very low and not many users will be impacted. Parsing large forms is already CPU intensive so the additional IO block doesn't slow down `starlette` that much on systems with modern
greater impact.
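For illustration, a rough sketch of the idea behind the fix described above (the attribute handling here is hypothetical, not the actual patch):

```python
from starlette.concurrency import run_in_threadpool

# Sketch: only write on the event loop when the bytes will stay in the
# in-memory spool; otherwise hand the (potentially blocking) write to a thread.
async def write(self, data: bytes) -> None:
    if self.size is not None:
        self.size += len(data)
    max_size = getattr(self.file, "_max_size", 0)  # SpooledTemporaryFile threshold
    stays_in_memory = self._in_memory and self.file.tell() + len(data) <= max_size
    if stays_in_memory:
        self.file.write(data)
    else:
        await run_in_threadpool(self.file.write, data)
```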
The Amazon Redshift Python Connector is a pure Python connector to Redshift (i.e., driver) that implements the Python Database API Specification
2.0. When the
Amazon Redshift Python Connector is configured with the
BrowserAzureOAuth2CredentialsProvider plugin, the driver
skips the SSL certificate validation step for the Identity
Provider.

### Impact

An insecure connection could allow an actor to intercept the token exchange process and retrieve an access token. Impacted versions: >=2.0.872; <=2.1.6.

### Patches

Upgrade Amazon Redshift Python Connector to version
2.1.7 and ensure any forked or derivative code is patched to
incorporate the new fixes.

### Workarounds

None

### References

If you have any questions or comments about this
advisory we ask that you contact AWS/Amazon Security via our
vulnerability reporting page [1] or directly via email to
[email protected].
Please do not create a public GitHub issue. [1] Vulnerability
reporting page:
https://aws.amazon.com/security/vulnerability-reporting
A vulnerability was found in the Snowflake Connector for Python. The OCSP response
cache uses pickle as the serialization format, potentially
leading to local privilege escalation. This vulnerability
affects versions 2.7.12 through 3.13.0. Snowflake fixed the
issue in version 3.13.1.

### Vulnerability Details

The OCSP
response cache is saved locally on the machine running the
Connector using the pickle serialization format. This can
potentially lead to local privilege escalation if an attacker
has write access to the OCSP response cache file.

### Solution

Snowflake released version 3.13.1 of the Snowflake
Connector for Python, which fixes this issue. We recommend
users upgrade to version 3.13.1.

### Additional Information
If you discover a security vulnerability in one of our
products or websites, please report the issue to HackerOne.
For more information, please see our Vulnerability
Disclosure
Policy.
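As a generic illustration of why a pickle-format cache is risky (the file path and class here are hypothetical, not Snowflake's code), unpickling attacker-controlled data executes attacker-chosen code:

```python
import os
import pickle

class Evil:
    def __reduce__(self):
        # Runs when the object is unpickled.
        return (os.system, ("echo code-execution-on-cache-load",))

cache_path = "/tmp/ocsp_response_cache.pickle"  # stand-in for a local cache file

# An attacker with write access plants a malicious pickle...
with open(cache_path, "wb") as f:
    pickle.dump(Evil(), f)

# ...and the command runs as soon as the victim process loads the cache.
with open(cache_path, "rb") as f:
    pickle.load(f)
```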
A vulnerability was found in the Snowflake Connector for Python. A function from the
snowflake.connector.pandas_tools module is vulnerable to SQL
injection. This vulnerability affects versions 2.2.5 through
3.13.0. Snowflake fixed the issue in version 3.13.1.

### Vulnerability Details

A function from the
snowflake.connector.pandas_tools module is not sanitizing all
of its arguments, and queries using them are not
parametrized. An attacker controlling these arguments could
achieve SQL injection by passing crafted input. Any SQL
executed that way by an attacker would still run in the
context of the current session.

### Solution

Snowflake
released version 3.13.1 of the Snowflake Connector for
Python, which fixes this issue. We recommend users upgrade to
version 3.13.1.

### Additional Information

If you discover a
security vulnerability in one of our products or websites,
please report the issue to HackerOne. For more information,
please see our Vulnerability Disclosure
Policy.
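As a generic, hypothetical illustration of this bug class (not the connector's internal code), interpolating an unsanitized argument into SQL changes the statement's meaning, whereas a parametrized query would not:

```python
# Crafted input smuggles an extra statement into the query text.
user_value = "'2025-01-01'; DROP TABLE users; --"  # attacker-controlled
query = f"SELECT * FROM events WHERE day = {user_value}"
print(query)
# SELECT * FROM events WHERE day = '2025-01-01'; DROP TABLE users; --
```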
A vulnerability was found in the Snowflake Connector for Python. On Linux systems, when
temporary credential caching is enabled, the Snowflake
Connector for Python will cache temporary credentials locally
in a world-readable file. This vulnerability affects versions
2.3.7 through 3.13.0. Snowflake fixed the issue in version
3.13.1.

### Vulnerability Details

On Linux, when either
EXTERNALBROWSER or USERNAME_PASSWORD_MFA authentication
methods are used with temporary credential caching enabled,
the Snowflake Connector for Python will cache the temporary
credentials in a local file. In the vulnerable versions of
the Driver, this file is created with world-readable
permissions.

### Solution

Snowflake released version 3.13.1
of the Snowflake Connector for Python, which fixes this
issue. We recommend users upgrade to version 3.13.1.

### Additional Information

If you discover a security
vulnerability in one of our products or websites, please
report the issue to HackerOne. For more information, please
see our Vulnerability Disclosure
Policy.
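For illustration, a minimal sketch (the cache path is hypothetical) of checking that a credential cache file is not readable by other users and tightening its permissions if it is:

```python
import os
import stat

cache_path = os.path.expanduser("~/.cache/snowflake/temporary_credential.json")  # hypothetical
if os.path.exists(cache_path):
    mode = stat.S_IMODE(os.stat(cache_path).st_mode)
    if mode & (stat.S_IRGRP | stat.S_IROTH):
        os.chmod(cache_path, 0o600)  # restrict access to the owning user
```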
You can merge this PR once the tests pass and the changes are reviewed.
Thank you for reviewing the update! 🚀