Issue: Add Response Compression Middleware #244
Conversation
@robertocarlous is attempting to deploy a commit to the olufunbiik's projects Team on Vercel. A member of the Team first needs to authorize it.
📝 Walkthrough

Added response compression middleware to the NestJS backend supporting gzip and Brotli encoding, with a configurable 1 KB threshold and exclusion of binary endpoints. Implemented middleware registration in the app bootstrap and middleware consumer, updated dependencies, added integration tests validating compression effectiveness, and documented measurement results.
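The exclusion behavior described in the walkthrough can be sketched as a filter function of the kind the `compression` package accepts; the rule shapes and names below are illustrative assumptions, not the PR's exact code (the 1 KB threshold itself would be passed via the package's `threshold` option):

```typescript
// Sketch of a compression filter with binary-endpoint exclusion.
// Rule shapes and names are assumptions for illustration only.
type Rule = { method: string; pattern: RegExp };

const BINARY_RULES: Rule[] = [
  { method: 'POST', pattern: /\/files\/upload\/?$/ },
  { method: 'GET', pattern: /\/files\/[^/]+\/stream\/?$/ },
];

function isBinaryEndpoint(method: string, path: string): boolean {
  return BINARY_RULES.some(
    (r) => r.method === method && r.pattern.test(path),
  );
}

// The `compression` package calls a filter per response; returning
// false skips compression for that request.
function shouldCompress(method: string, path: string): boolean {
  return !isBinaryEndpoint(method, path);
}

console.log(shouldCompress('GET', '/api/tips/history')); // true
console.log(shouldCompress('POST', '/files/upload'));    // false
```

In the real middleware this filter would be wired up alongside `threshold: 1024` when creating the compression handler.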
Sequence Diagram

```mermaid
sequenceDiagram
    participant Client
    participant NestApp as NestJS App
    participant VaryMW as VaryAcceptEncodingMiddleware
    participant CompressionMW as CompressionMiddleware
    participant Handler as Route Handler
    participant Network
    Client->>NestApp: GET /api/endpoint<br/>Accept-Encoding: br, gzip
    NestApp->>VaryMW: Invoke middleware
    VaryMW->>VaryMW: Set Vary: Accept-Encoding
    VaryMW->>CompressionMW: next()
    CompressionMW->>CompressionMW: shouldCompress(req, res)
    alt Binary Endpoint (e.g., /files/upload)
        CompressionMW->>Handler: Skip compression filter
    else JSON Endpoint
        CompressionMW->>Handler: Apply compression filter
    end
    Handler->>Handler: Generate response
    Handler->>CompressionMW: Response stream
    alt Response ≥ 1KB & not binary
        CompressionMW->>CompressionMW: Select Brotli (preferred)<br/>or Gzip (fallback)
        CompressionMW->>CompressionMW: Compress payload
    else Response < 1KB or binary
        CompressionMW->>CompressionMW: Skip compression
    end
    CompressionMW->>Network: Content-Encoding: br/gzip<br/>Vary: Accept-Encoding
    Network->>Client: Compressed response
```
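The Brotli-preferred / gzip-fallback selection shown in the diagram can be sketched as a small negotiation helper. This is a simplified model: the real `compression` package also honors q-values in Accept-Encoding, which this sketch ignores.

```typescript
// Simplified encoding negotiation: prefer Brotli when the client
// advertises it, fall back to gzip, else send identity (uncompressed).
// NOTE: real negotiation must also weigh q-values; this sketch ignores them.
function pickEncoding(acceptEncoding: string): 'br' | 'gzip' | 'identity' {
  const offered = acceptEncoding
    .split(',')
    .map((token) => token.trim().split(';')[0].toLowerCase());
  if (offered.includes('br')) return 'br';
  if (offered.includes('gzip')) return 'gzip';
  return 'identity';
}

console.log(pickEncoding('br, gzip')); // br
console.log(pickEncoding('gzip'));     // gzip
console.log(pickEncoding(''));         // identity
```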
Estimated code review effort: 🎯 3 (Moderate) | ⏱️ ~25 minutes
🚥 Pre-merge checks: ✅ 4 passed | ❌ 1 failed (1 warning)
@OlufunbiIK pls review
Actionable comments posted: 4
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
backend/test-compression-config.js (1)
Lines 46-56: ⚠️ Potential issue | 🟡 Minor — Middleware positioning check uses outdated pattern.
Line 48 searches for `app.use(compression(`, but the actual registration in `main.ts` uses `app.use(createCompressionMiddleware())`. This will cause `compressionIndex` to be `-1`, making `correctPositioning` always `false` even when the middleware is correctly positioned.

🐛 Proposed fix to use correct pattern

```diff
 // Check 4: Middleware positioning (after CORS, before routes)
 const corsIndex = mainTsContent.indexOf('enableCors');
-const compressionIndex = mainTsContent.indexOf('app.use(compression(');
+const compressionIndex = mainTsContent.indexOf('app.use(createCompressionMiddleware()');
 const routesIndex = mainTsContent.indexOf('setGlobalPrefix');
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@backend/test-compression-config.js` around lines 46 - 56, The middleware positioning check fails because compressionIndex searches for "app.use(compression(" but the app actually registers compression as "app.use(createCompressionMiddleware())"; update the search used to compute compressionIndex in backend/test-compression-config.js to look for the actual registration string (e.g., "app.use(createCompressionMiddleware())" or a more robust substring like "createCompressionMiddleware") so compressionIndex reflects the real position in mainTsContent, leaving corsIndex, routesIndex and the correctPositioning logic unchanged.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@backend/src/common/middleware/response-compression.middleware.ts`:
- Around line 10-14: BINARY_ENDPOINT_RULES currently only matches paths like
/files/... but the app uses a global prefix (setGlobalPrefix('api')) so requests
are /api/files/..., causing misclassification; update the regex patterns in
BINARY_ENDPOINT_RULES (the three entries) to include an optional leading "/api"
and optional version segment (e.g. ^\/api(?:\/v\d+)?\/files\/upload\/?$ etc.) so
the POST/GET rules for upload, stream and file lookup match the actual request
paths handled by the compression middleware.
In `@backend/test/compression-integration.e2e-spec.ts`:
- Around line 52-64: The test's binary routes (uploadFile, downloadFile,
streamFile and the other handlers around lines 144-162) return tiny text-like
Buffers so compression remains off due to the 1KB threshold and doesn't validate
the binary-route exclusion; change those handlers to return >1KB of non-text
binary data (e.g., a randomized or repeated byte pattern Buffer) so responses
are genuinely binary and exceed the compression threshold, then rerun the spec
to ensure content-encoding remains empty for these endpoints.
- Line 5: The test currently mounts createCompressionMiddleware() directly and
only exercises the helper, so update the e2e spec to start the real application
bootstrap (the actual app/module used in production) for at least one test path
so compression wiring in backend/src/main.ts or AppModule is validated;
specifically replace the controller-only app setup with the production bootstrap
(or import and call the existing bootstrap/createApp function) for one of the
test cases (also update the tests around lines 97-104 mentioned in the comment)
so requests flow through the real module wiring and middleware instead of
directly injecting createCompressionMiddleware().
- Around line 165-184: The test currently computes compression ratios by
compressing the local JSON payload with zlib instead of measuring the server's
actual responses; change the loop over endpointCases to perform three real
requests per endpoint (Accept-Encoding: identity, br, gzip), capture the raw
response bytes of each server response (ensure you fetch raw/buffered bodies,
not parsed JSON), compute sizes from those response buffers and then compute
brotliRatio and gzipRatio as 1 - compressedSize / originalSize using the
server-returned 'br' and 'gzip' response bodies, and assert those ratios are >=
0.4; update references around the test name and variables endpointCases,
baselineResponse, brotliSize, gzipSize, brotliRatio and gzipRatio accordingly.
---
Outside diff comments:
In `@backend/test-compression-config.js`:
- Around line 46-56: The middleware positioning check fails because
compressionIndex searches for "app.use(compression(" but the app actually
registers compression as "app.use(createCompressionMiddleware())"; update the
search used to compute compressionIndex in backend/test-compression-config.js to
look for the actual registration string (e.g.,
"app.use(createCompressionMiddleware())" or a more robust substring like
"createCompressionMiddleware") so compressionIndex reflects the real position in
mainTsContent, leaving corsIndex, routesIndex and the correctPositioning logic
unchanged.
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: 949db7e5-bbde-4b3b-abdc-c605d94ec45c
⛔ Files ignored due to path filters (1)
backend/package-lock.json is excluded by !**/package-lock.json
📒 Files selected for processing (9)
- backend/COMPRESSION_MEASUREMENTS.md
- backend/package.json
- backend/src/app.module.ts
- backend/src/common/middleware/response-compression.middleware.ts
- backend/src/common/middleware/vary-accept-encoding.middleware.ts
- backend/src/main.ts
- backend/test-compression-config.js
- backend/test/compression-integration.e2e-spec.ts
- backend/test/compression-vary-header.e2e-spec.ts
```ts
const BINARY_ENDPOINT_RULES: BinaryEndpointRule[] = [
  { method: 'POST', pattern: /\/files\/upload\/?$/ },
  { method: 'GET', pattern: /\/files\/[^/]+\/stream\/?$/ },
  { method: 'GET', pattern: /\/files\/[^/]+\/?$/ },
];
```
🧩 Analysis chain
🏁 Script executed:

```bash
#!/bin/bash
# Verify the global prefix and versioning configuration in main.ts
echo "=== Checking setGlobalPrefix usage ==="
rg -n "setGlobalPrefix" backend/src/main.ts -A 5
echo ""
echo "=== Checking enableVersioning usage ==="
rg -n "enableVersioning" backend/src/main.ts -A 5
echo ""
echo "=== Checking StorageController route definition ==="
rg -n "@Controller" backend/src/storage/storage.controller.ts -A 2
```

Repository: OlufunbiIK/tip-tune
Length of output: 696

🏁 Script executed:

```bash
cat -n backend/src/common/middleware/response-compression.middleware.ts
```

Length of output: 1515

🏁 Script executed:

```bash
rg -n "createCompressionMiddleware\|compression" backend/src/main.ts -B 3 -A 3
```

Length of output: 45

🏁 Script executed:

```bash
rg -n "compression" backend/src -A 2 -B 2
```

Length of output: 1733

🏁 Script executed:

```bash
rg -n "app.use" backend/src/main.ts -B 2 -A 2
```

Length of output: 455

🏁 Script executed:

```bash
sed -n '1,60p' backend/src/main.ts | cat -n
```

Length of output: 2205

🏁 Script executed:

```bash
# Check if there are any route handlers that show the actual path structure
rg -n "@Get\|@Post\|@Controller" backend/src/storage/storage.controller.ts -A 1 | head -40
```

Length of output: 45

🏁 Script executed:

```bash
cat -n backend/src/storage/storage.controller.ts | head -60
```

Length of output: 2215

🏁 Script executed:

```bash
# Let me check if there are any route decorators with explicit version info
rg -n "@Version\|Version:" backend/src/storage/storage.controller.ts
```

Length of output: 45

🏁 Script executed:

```bash
# Check what the URI versioning actually produces
# Look for any tests or documentation about the version format
rg -n "v1\|v2\|/1/\|/2/" backend/src --type ts -A 1 -B 1 | head -30
```

Length of output: 45

🏁 Script executed:

```bash
# Check if there are API route examples or tests that show actual request paths
find backend/src -name "*.spec.ts" -o -name "*.test.ts" | head -5 | xargs ls -la
```

Length of output: 585

🏁 Script executed:

```bash
# Check if there are any existing tests for the compression middleware
find backend/src -name "*compression*" -o -name "*middleware*" | xargs ls -la 2>/dev/null
```

Length of output: 692

🏁 Script executed:

```bash
# Let me verify how NestJS URI versioning actually formats the path
# Check the main.ts again for the default version format
grep -n "defaultVersion\|API_VERSION" backend/src/main.ts
```

Length of output: 173
Binary endpoint patterns must include the /api global prefix to match requests.
The compression middleware is registered via app.use() at line 22 of main.ts, which operates on the full request path including the global prefix. Since setGlobalPrefix('api') is applied at line 45, all StorageController routes become /api/files/.... However, the current patterns in BINARY_ENDPOINT_RULES only match /files/... without the prefix.
When a client requests /api/files/upload, the pattern /\/files\/upload\/?$/ will not match, causing binary endpoints to be compressed when they should not be.
🐛 Proposed fix

```diff
 const BINARY_ENDPOINT_RULES: BinaryEndpointRule[] = [
-  { method: 'POST', pattern: /\/files\/upload\/?$/ },
-  { method: 'GET', pattern: /\/files\/[^/]+\/stream\/?$/ },
-  { method: 'GET', pattern: /\/files\/[^/]+\/?$/ },
+  { method: 'POST', pattern: /\/(?:api\/)?(?:v\d+\/)?files\/upload\/?$/ },
+  { method: 'GET', pattern: /\/(?:api\/)?(?:v\d+\/)?files\/[^/]+\/stream\/?$/ },
+  { method: 'GET', pattern: /\/(?:api\/)?(?:v\d+\/)?files\/[^/]+\/?$/ },
 ];
```

The optional segments handle the api/ prefix and URI versioning segment (v1/, etc.) configured in main.ts.
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```ts
const BINARY_ENDPOINT_RULES: BinaryEndpointRule[] = [
  { method: 'POST', pattern: /\/(?:api\/)?(?:v\d+\/)?files\/upload\/?$/ },
  { method: 'GET', pattern: /\/(?:api\/)?(?:v\d+\/)?files\/[^/]+\/stream\/?$/ },
  { method: 'GET', pattern: /\/(?:api\/)?(?:v\d+\/)?files\/[^/]+\/?$/ },
];
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@backend/src/common/middleware/response-compression.middleware.ts` around
lines 10 - 14, BINARY_ENDPOINT_RULES currently only matches paths like
/files/... but the app uses a global prefix (setGlobalPrefix('api')) so requests
are /api/files/..., causing misclassification; update the regex patterns in
BINARY_ENDPOINT_RULES (the three entries) to include an optional leading "/api"
and optional version segment (e.g. ^\/api(?:\/v\d+)?\/files\/upload\/?$ etc.) so
the POST/GET rules for upload, stream and file lookup match the actual request
paths handled by the compression middleware.
```ts
import * as request from 'supertest';
import * as compression from 'compression';
import * as zlib from 'zlib';
import { createCompressionMiddleware } from '../src/common/middleware/response-compression.middleware';
```
This only tests the helper, not the app integration.
Because this suite builds a controller-only app and mounts createCompressionMiddleware() itself, it would still pass if backend/src/main.ts or backend/src/app.module.ts stopped wiring compression in production. For this PR’s acceptance criteria, at least one e2e path should go through the real app/module bootstrap.
Also applies to: 97-104
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@backend/test/compression-integration.e2e-spec.ts` at line 5, The test
currently mounts createCompressionMiddleware() directly and only exercises the
helper, so update the e2e spec to start the real application bootstrap (the
actual app/module used in production) for at least one test path so compression
wiring in backend/src/main.ts or AppModule is validated; specifically replace
the controller-only app setup with the production bootstrap (or import and call
the existing bootstrap/createApp function) for one of the test cases (also
update the tests around lines 97-104 mentioned in the comment) so requests flow
through the real module wiring and middleware instead of directly injecting
createCompressionMiddleware().
```diff
+  @Post('files/upload')
+  uploadFile() {
+    return Buffer.from('binary-upload-response');
+  }
+
-  @Get('tracks/:id/download')
-  downloadTrack(@Param('id') id: string) {
-    // Simulate binary download endpoint
-    return Buffer.from('fake-audio-file-binary-content');
-  }
+  @Get('files/:filename')
+  downloadFile(@Param('filename') _filename: string) {
+    return Buffer.from('binary-download-response');
+  }
+
-  @Get('storage/:path/download')
-  downloadFile(@Param('path') path: string) {
-    // Simulate storage download endpoint
-    return Buffer.from('fake-file-binary-content');
-  }
+  @Get('files/:filename/stream')
+  streamFile(@Param('filename') _filename: string) {
+    return Buffer.from('binary-stream-response');
+  }
```
The binary-route check is masked by the 1KB threshold.
These payloads are tiny, so content-encoding stays empty even when the binary-endpoint exclusion is broken. Make the file responses exceed the threshold and keep them truly binary, otherwise this spec is only proving “small responses are not compressed,” not that the binary routes are excluded.
Also applies to: 144-162
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@backend/test/compression-integration.e2e-spec.ts` around lines 52 - 64, The
test's binary routes (uploadFile, downloadFile, streamFile and the other
handlers around lines 144-162) return tiny text-like Buffers so compression
remains off due to the 1KB threshold and doesn't validate the binary-route
exclusion; change those handlers to return >1KB of non-text binary data (e.g., a
randomized or repeated byte pattern Buffer) so responses are genuinely binary
and exceed the compression threshold, then rerun the spec to ensure
content-encoding remains empty for these endpoints.
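One way to make the fixture responses both genuinely binary and larger than the 1 KB threshold, as this comment recommends, is to build them from random bytes; a minimal sketch (the helper name is illustrative, not from the PR):

```typescript
import { randomBytes } from 'crypto';

// Build a payload that is clearly binary (random bytes, high entropy,
// not valid UTF-8 text) and comfortably above the middleware's
// 1 KB compression threshold.
function makeBinaryPayload(size = 4096): Buffer {
  return randomBytes(size);
}

const payload = makeBinaryPayload();
console.log(payload.length > 1024); // true
```

Returning such a buffer from the test handlers ensures that an empty `content-encoding` header proves the binary-route exclusion worked, not just that the response was under the threshold.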
```diff
-    // Identity means no compression
-    expect(response.headers['content-encoding']).toBeUndefined();
-
-    // Vary header should still be present
-    expect(response.headers['vary']).toBeDefined();
-  });
-
-  it('should verify Content-Encoding matches the algorithm used', async () => {
-    const brotliResponse = await request(app.getHttpServer())
-      .get('/test-compression/large')
-      .set('Accept-Encoding', 'br')
-      .buffer(true)
-      .parse((res, callback) => {
-        let data = Buffer.from('');
-        res.on('data', (chunk) => {
-          data = Buffer.concat([data, chunk]);
-        });
-        res.on('end', () => {
-          callback(null, data);
-        });
-      });
-
-    expect(brotliResponse.headers['content-encoding']).toBe('br');
-
-    const gzipResponse = await request(app.getHttpServer())
-      .get('/test-compression/large')
-      .set('Accept-Encoding', 'gzip')
-      .buffer(true)
-      .parse((res, callback) => {
-        let data = Buffer.from('');
-        res.on('data', (chunk) => {
-          data = Buffer.concat([data, chunk]);
-        });
-        res.on('end', () => {
-          callback(null, data);
-        });
-      });
-
-    expect(gzipResponse.headers['content-encoding']).toBe('gzip');
-  });
+  it('achieves at least 40% size reduction for target large-list endpoints', async () => {
+    for (const endpoint of endpointCases) {
+      const baselineResponse = await request(app.getHttpServer())
+        .get(endpoint.path)
+        .set('Accept-Encoding', 'identity')
+        .expect(200);
+
+      const baselinePayload = Buffer.from(JSON.stringify(baselineResponse.body));
+      const originalSize = baselinePayload.length;
+      const brotliSize = zlib.brotliCompressSync(baselinePayload, {
+        params: {
+          [zlib.constants.BROTLI_PARAM_QUALITY]: 4,
+        },
+      }).length;
+      const gzipSize = zlib.gzipSync(baselinePayload).length;
+      const brotliRatio = 1 - brotliSize / originalSize;
+      const gzipRatio = 1 - gzipSize / originalSize;
+      expect(brotliRatio).toBeGreaterThanOrEqual(0.4);
+      expect(gzipRatio).toBeGreaterThanOrEqual(0.4);
+    }
```
The 40% assertion is measuring local zlib, not the server response.
Right now this mostly proves the synthetic payload is compressible. The loop never fetches the br/gzip variants for these endpoints, so it can pass even if /tips/history, /analytics, /search, or /activities/feed are served uncompressed. Compare the raw response bytes for identity, br, and gzip instead.
Suggested direction
```diff
 it('achieves at least 40% size reduction for target large-list endpoints', async () => {
   for (const endpoint of endpointCases) {
     const baselineResponse = await request(app.getHttpServer())
       .get(endpoint.path)
       .set('Accept-Encoding', 'identity')
+      .buffer(true)
+      .parse(parseRawBuffer)
       .expect(200);
-    const baselinePayload = Buffer.from(JSON.stringify(baselineResponse.body));
-    const originalSize = baselinePayload.length;
-    const brotliSize = zlib.brotliCompressSync(baselinePayload, {
-      params: {
-        [zlib.constants.BROTLI_PARAM_QUALITY]: 4,
-      },
-    }).length;
-    const gzipSize = zlib.gzipSync(baselinePayload).length;
+    const brResponse = await request(app.getHttpServer())
+      .get(endpoint.path)
+      .set('Accept-Encoding', 'br')
+      .buffer(true)
+      .parse(parseRawBuffer)
+      .expect(200);
+
+    const gzipResponse = await request(app.getHttpServer())
+      .get(endpoint.path)
+      .set('Accept-Encoding', 'gzip')
+      .buffer(true)
+      .parse(parseRawBuffer)
+      .expect(200);
+
+    expect(brResponse.headers['content-encoding']).toBe('br');
+    expect(gzipResponse.headers['content-encoding']).toBe('gzip');
+
+    const originalSize = baselineResponse.body.length;
+    const brotliSize = brResponse.body.length;
+    const gzipSize = gzipResponse.body.length;
     const brotliRatio = 1 - brotliSize / originalSize;
     const gzipRatio = 1 - gzipSize / originalSize;
     expect(brotliRatio).toBeGreaterThanOrEqual(0.4);
     expect(gzipRatio).toBeGreaterThanOrEqual(0.4);
   }
 });
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@backend/test/compression-integration.e2e-spec.ts` around lines 165 - 184, The
test currently computes compression ratios by compressing the local JSON payload
with zlib instead of measuring the server's actual responses; change the loop
over endpointCases to perform three real requests per endpoint (Accept-Encoding:
identity, br, gzip), capture the raw response bytes of each server response
(ensure you fetch raw/buffered bodies, not parsed JSON), compute sizes from
those response buffers and then compute brotliRatio and gzipRatio as 1 -
compressedSize / originalSize using the server-returned 'br' and 'gzip' response
bodies, and assert those ratios are >= 0.4; update references around the test
name and variables endpointCases, baselineResponse, brotliSize, gzipSize,
brotliRatio and gzipRatio accordingly.
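The parseRawBuffer helper referenced in the suggested diff is not defined in the snippet; a minimal version matching supertest's custom-parser signature simply concatenates the raw response chunks without decoding them (the implementation below is an assumption, demonstrated here with an in-memory stream standing in for a response):

```typescript
import { PassThrough } from 'stream';

// Minimal supertest-style custom parser: collect the raw response
// bytes without decoding, so compressed payload sizes can be measured.
// The name parseRawBuffer comes from the suggested diff; this
// implementation is an illustrative assumption.
function parseRawBuffer(
  res: NodeJS.ReadableStream,
  callback: (err: Error | null, body: Buffer) => void,
): void {
  const chunks: Buffer[] = [];
  res.on('data', (chunk: Buffer) => chunks.push(chunk));
  res.on('end', () => callback(null, Buffer.concat(chunks)));
}

// Demonstrate with an in-memory stream standing in for a response.
const fake = new PassThrough();
parseRawBuffer(fake, (_err, body) => {
  console.log(body.length); // 8
});
fake.write(Buffer.from('abcd'));
fake.end(Buffer.from('efgh'));
```

With this helper, `response.body` in the test is a Buffer of the exact bytes the server sent, so the br and gzip sizes reflect real on-the-wire compression rather than a local zlib re-compression.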
LGTM
Closes #206
Summary by CodeRabbit
New Features
Tests