
Issue: Add Response Compression Middleware #244

Merged
OlufunbiIK merged 1 commit into OlufunbiIK:main from robertocarlous:feat/Add-Response-Compression-Middleware
Mar 23, 2026

Conversation

@robertocarlous
Contributor

@robertocarlous robertocarlous commented Mar 23, 2026

Closes #206

Summary by CodeRabbit

  • New Features

    • Enabled response compression for API endpoints using brotli and gzip algorithms to reduce bandwidth usage and improve response times.
    • Configured proper HTTP headers for compression compatibility and cache optimization.
    • Excluded binary upload and download endpoints from compression processing.
  • Tests

    • Updated compression tests to verify encoding effectiveness and HTTP header handling.

@vercel

vercel bot commented Mar 23, 2026

@robertocarlous is attempting to deploy a commit to olufunbiik's projects team on Vercel.

A member of the Team first needs to authorize it.

@coderabbitai

coderabbitai bot commented Mar 23, 2026

📝 Walkthrough

Walkthrough

Added response compression middleware to the NestJS backend supporting gzip and brotli encoding with configurable 1KB threshold and binary endpoint exclusion. Implemented middleware registration in app bootstrap and middleware consumer, updated dependencies, added integration tests validating compression effectiveness, and documented measurement results.
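The threshold-plus-exclusion behavior described above can be sketched as a plain predicate. The rule list, names, and signature here are illustrative stand-ins, not the PR's actual code:

```typescript
// Illustrative sketch of the compression decision, assuming hypothetical names.
interface BinaryEndpointRule {
  method: string;
  pattern: RegExp;
}

const BINARY_RULES: BinaryEndpointRule[] = [
  { method: 'POST', pattern: /\/files\/upload\/?$/ },
  { method: 'GET', pattern: /\/files\/[^/]+\/stream\/?$/ },
];

const THRESHOLD_BYTES = 1024; // responses below 1KB stay uncompressed

function shouldCompress(method: string, path: string, bodyLength: number): boolean {
  if (bodyLength < THRESHOLD_BYTES) return false;
  const isBinary = BINARY_RULES.some(
    (rule) => rule.method === method && rule.pattern.test(path),
  );
  return !isBinary;
}
```

In the real middleware this predicate would be handed to the compression library as its filter, with the threshold configured separately.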

Changes

  • Configuration & Dependencies — backend/package.json
    Added compression and @types/compression dependencies for response compression support.
  • Middleware Implementation — backend/src/common/middleware/response-compression.middleware.ts, backend/src/common/middleware/vary-accept-encoding.middleware.ts
    Created a compression middleware factory (createCompressionMiddleware) with binary endpoint detection via route pattern matching, a 1KB threshold, and brotli/gzip configuration; added a Vary header middleware that sets Accept-Encoding on response headers.
  • App Bootstrap & Setup — backend/src/main.ts, backend/src/app.module.ts
    Registered the global compression middleware via app.use(createCompressionMiddleware()) in main.ts; updated AppModule to implement NestModule with a middleware consumer that applies the Vary header on all routes.
  • Documentation & Measurements — backend/COMPRESSION_MEASUREMENTS.md
    Added compression measurement documentation specifying test payload categories, compression settings (brotli quality, gzip fallback, 1KB threshold), the comparison methodology, and a results table confirming ≥40% size reduction across target endpoints.
  • Test Refactoring — backend/test/compression-integration.e2e-spec.ts, backend/test/compression-vary-header.e2e-spec.ts, backend/test-compression-config.js
    Refactored integration tests to use the middleware factory; replaced prior test routes with JSON list endpoints (/tracks, /tips/history, /analytics, /search, /activities/feed) and binary file routes under /test-compression/files/*; added assertions for Content-Encoding headers, Vary header presence, binary endpoint exclusion, and compression ratio validation (≥40% reduction); updated the config test to verify middleware factory integration and binary endpoint pattern matching.

Sequence Diagram

sequenceDiagram
    participant Client
    participant NestApp as NestJS App
    participant VaryMW as VaryAcceptEncodingMiddleware
    participant CompressionMW as CompressionMiddleware
    participant Handler as Route Handler
    participant Network

    Client->>NestApp: GET /api/endpoint<br/>Accept-Encoding: br, gzip
    NestApp->>VaryMW: Invoke middleware
    VaryMW->>VaryMW: Set Vary: Accept-Encoding
    VaryMW->>CompressionMW: next()
    CompressionMW->>CompressionMW: shouldCompress(req, res)
    alt Binary Endpoint (e.g., /files/upload)
        CompressionMW->>Handler: Skip compression filter
    else JSON Endpoint
        CompressionMW->>Handler: Apply compression filter
    end
    Handler->>Handler: Generate response
    Handler->>CompressionMW: Response stream
    alt Response ≥ 1KB & not binary
        CompressionMW->>CompressionMW: Select Brotli (preferred)<br/>or Gzip (fallback)
        CompressionMW->>CompressionMW: Compress payload
    else Response < 1KB or binary
        CompressionMW->>CompressionMW: Skip compression
    end
    CompressionMW->>Network: Content-Encoding: br/gzip<br/>Vary: Accept-Encoding
    Network->>Client: Compressed response

Estimated code review effort

🎯 3 (Moderate) | ⏱️ ~25 minutes

Possibly related PRs

  • feat: Add HTTP response compression with Brotli and Gzip support #238: Implements the identical response-compression feature, including middleware factory pattern (createCompressionMiddleware), Vary header handling, binary endpoint detection, compression threshold configuration, and the same integration and config validation tests across all modified files.

Poem

🐰 Compression hops in with mighty might,
Brotli breezes, gzip shines bright,
Responses shrink by forty percent,
Bandwidth saved and files well-spent!
Vary headers wave their elegant dance,
Caches rejoice at this second chance!

🚥 Pre-merge checks: ✅ 4 passed | ❌ 1 failed

❌ Failed checks (1 warning)
  • Docstring Coverage ⚠️ Warning — Docstring coverage is 0.00%, below the required threshold of 80.00%. Resolution: write docstrings for the functions that are missing them.

✅ Passed checks (4 passed)
  • Description Check — Check skipped; CodeRabbit's high-level summary is enabled.
  • Title Check — The title accurately describes the main change: adding response compression middleware to the backend.
  • Linked Issues Check — The PR implements all key requirements: compression middleware with brotli/gzip, a 1KB threshold, binary endpoint exclusion, the Vary header, and 40% size reduction documentation.
  • Out of Scope Changes Check — All changes are directly related to implementing response compression middleware as specified in issue #206.



@robertocarlous
Contributor Author

@OlufunbiIK pls review


@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 4

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (1)
backend/test-compression-config.js (1)

46-56: ⚠️ Potential issue | 🟡 Minor

Middleware positioning check uses outdated pattern.

Line 48 searches for app.use(compression( but the actual registration in main.ts uses app.use(createCompressionMiddleware()). This will cause compressionIndex to be -1, making correctPositioning always false even when the middleware is correctly positioned.

🐛 Proposed fix to use correct pattern
 // Check 4: Middleware positioning (after CORS, before routes)
 const corsIndex = mainTsContent.indexOf('enableCors');
-const compressionIndex = mainTsContent.indexOf('app.use(compression(');
+const compressionIndex = mainTsContent.indexOf('app.use(createCompressionMiddleware()');
 const routesIndex = mainTsContent.indexOf('setGlobalPrefix');
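The fix reduces to comparing indexOf offsets in the file contents. A minimal illustration of that ordering logic (the sample source string is hypothetical, simplified from what main.ts would contain):

```typescript
// Hypothetical, simplified main.ts source; ordering mirrors the real file
const mainTsContent = [
  'app.enableCors();',
  'app.use(createCompressionMiddleware());',
  "app.setGlobalPrefix('api');",
].join('\n');

// The outdated pattern never matches, so the check silently fails
const staleIndex = mainTsContent.indexOf('app.use(compression(');

// Searching for the actual registration string finds the real position
const corsIndex = mainTsContent.indexOf('enableCors');
const compressionIndex = mainTsContent.indexOf('createCompressionMiddleware');
const routesIndex = mainTsContent.indexOf('setGlobalPrefix');

// Middleware must sit after CORS setup and before the route prefix
const correctPositioning =
  corsIndex !== -1 &&
  compressionIndex > corsIndex &&
  routesIndex > compressionIndex;
```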
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@backend/test-compression-config.js` around lines 46 - 56, The middleware
positioning check fails because compressionIndex searches for
"app.use(compression(" but the app actually registers compression as
"app.use(createCompressionMiddleware())"; update the search used to compute
compressionIndex in backend/test-compression-config.js to look for the actual
registration string (e.g., "app.use(createCompressionMiddleware())" or a more
robust substring like "createCompressionMiddleware") so compressionIndex
reflects the real position in mainTsContent, leaving corsIndex, routesIndex and
the correctPositioning logic unchanged.


📥 Commits

Reviewing files that changed from the base of the PR and between c72d5cf and 2b74923.

⛔ Files ignored due to path filters (1)
  • backend/package-lock.json is excluded by !**/package-lock.json
📒 Files selected for processing (9)
  • backend/COMPRESSION_MEASUREMENTS.md
  • backend/package.json
  • backend/src/app.module.ts
  • backend/src/common/middleware/response-compression.middleware.ts
  • backend/src/common/middleware/vary-accept-encoding.middleware.ts
  • backend/src/main.ts
  • backend/test-compression-config.js
  • backend/test/compression-integration.e2e-spec.ts
  • backend/test/compression-vary-header.e2e-spec.ts

Comment on lines +10 to +14
const BINARY_ENDPOINT_RULES: BinaryEndpointRule[] = [
  { method: 'POST', pattern: /\/files\/upload\/?$/ },
  { method: 'GET', pattern: /\/files\/[^/]+\/stream\/?$/ },
  { method: 'GET', pattern: /\/files\/[^/]+\/?$/ },
];

⚠️ Potential issue | 🟠 Major

🧩 Analysis chain

🏁 Script executed:

#!/bin/bash
# Verify the global prefix and versioning configuration in main.ts
echo "=== Checking setGlobalPrefix usage ==="
rg -n "setGlobalPrefix" backend/src/main.ts -A 5

echo ""
echo "=== Checking enableVersioning usage ==="
rg -n "enableVersioning" backend/src/main.ts -A 5

echo ""
echo "=== Checking StorageController route definition ==="
rg -n "@Controller" backend/src/storage/storage.controller.ts -A 2

Repository: OlufunbiIK/tip-tune

Length of output: 696


🏁 Script executed:

cat -n backend/src/common/middleware/response-compression.middleware.ts



🏁 Script executed:

rg -n "createCompressionMiddleware\|compression" backend/src/main.ts -B 3 -A 3



🏁 Script executed:

rg -n "compression" backend/src -A 2 -B 2



🏁 Script executed:

rg -n "app.use" backend/src/main.ts -B 2 -A 2



🏁 Script executed:

sed -n '1,60p' backend/src/main.ts | cat -n



🏁 Script executed:

# Check if there are any route handlers that show the actual path structure
rg -n "@Get\|@Post\|@Controller" backend/src/storage/storage.controller.ts -A 1 | head -40



🏁 Script executed:

cat -n backend/src/storage/storage.controller.ts | head -60



🏁 Script executed:

# Let me check if there are any route decorators with explicit version info
rg -n "@Version\|Version:" backend/src/storage/storage.controller.ts



🏁 Script executed:

# Check what the URI versioning actually produces
# Look for any tests or documentation about the version format
rg -n "v1\|v2\|/1/\|/2/" backend/src --type ts -A 1 -B 1 | head -30



🏁 Script executed:

# Check if there are API route examples or tests that show actual request paths
find backend/src -name "*.spec.ts" -o -name "*.test.ts" | head -5 | xargs ls -la



🏁 Script executed:

# Check if there are any existing tests for the compression middleware
find backend/src -name "*compression*" -o -name "*middleware*" | xargs ls -la 2>/dev/null



🏁 Script executed:

# Let me verify how NestJS URI versioning actually formats the path
# Check the main.ts again for the default version format
grep -n "defaultVersion\|API_VERSION" backend/src/main.ts



Binary endpoint patterns must include the /api global prefix to match requests.

The compression middleware is registered via app.use() at line 22 of main.ts, which operates on the full request path including the global prefix. Since setGlobalPrefix('api') is applied at line 45, all StorageController routes become /api/files/.... However, the current patterns in BINARY_ENDPOINT_RULES only match /files/... without the prefix.

When a client requests /api/files/upload, the pattern /\/files\/upload\/?$/ will not match, causing binary endpoints to be compressed when they should not be.

🐛 Proposed fix
 const BINARY_ENDPOINT_RULES: BinaryEndpointRule[] = [
-  { method: 'POST', pattern: /\/files\/upload\/?$/ },
-  { method: 'GET', pattern: /\/files\/[^/]+\/stream\/?$/ },
-  { method: 'GET', pattern: /\/files\/[^/]+\/?$/ },
+  { method: 'POST', pattern: /\/(?:api\/)?(?:v\d+\/)?files\/upload\/?$/ },
+  { method: 'GET', pattern: /\/(?:api\/)?(?:v\d+\/)?files\/[^/]+\/stream\/?$/ },
+  { method: 'GET', pattern: /\/(?:api\/)?(?:v\d+\/)?files\/[^/]+\/?$/ },
 ];

The optional segments handle the api/ prefix and URI versioning segment (v1/, etc.) configured in main.ts.
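The proposed patterns can be sanity-checked against representative request paths. The paths below are illustrative examples, not routes copied from the codebase:

```typescript
// Patterns from the proposed fix
const uploadPattern = /\/(?:api\/)?(?:v\d+\/)?files\/upload\/?$/;
const streamPattern = /\/(?:api\/)?(?:v\d+\/)?files\/[^/]+\/stream\/?$/;

// With and without the global prefix and URI version segment
const prefixedUpload = uploadPattern.test('/api/files/upload');
const versionedUpload = uploadPattern.test('/api/v1/files/upload');
const bareUpload = uploadPattern.test('/files/upload');
const prefixedStream = streamPattern.test('/api/files/song.mp3/stream');

// A JSON endpoint must not be classified as binary
const nonBinary = uploadPattern.test('/api/tracks');
```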

📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change

Before:
const BINARY_ENDPOINT_RULES: BinaryEndpointRule[] = [
  { method: 'POST', pattern: /\/files\/upload\/?$/ },
  { method: 'GET', pattern: /\/files\/[^/]+\/stream\/?$/ },
  { method: 'GET', pattern: /\/files\/[^/]+\/?$/ },
];

After:
const BINARY_ENDPOINT_RULES: BinaryEndpointRule[] = [
  { method: 'POST', pattern: /\/(?:api\/)?(?:v\d+\/)?files\/upload\/?$/ },
  { method: 'GET', pattern: /\/(?:api\/)?(?:v\d+\/)?files\/[^/]+\/stream\/?$/ },
  { method: 'GET', pattern: /\/(?:api\/)?(?:v\d+\/)?files\/[^/]+\/?$/ },
];
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@backend/src/common/middleware/response-compression.middleware.ts` around
lines 10 - 14, BINARY_ENDPOINT_RULES currently only matches paths like
/files/... but the app uses a global prefix (setGlobalPrefix('api')) so requests
are /api/files/..., causing misclassification; update the regex patterns in
BINARY_ENDPOINT_RULES (the three entries) to include an optional leading "/api"
and optional version segment (e.g. ^\/api(?:\/v\d+)?\/files\/upload\/?$ etc.) so
the POST/GET rules for upload, stream and file lookup match the actual request
paths handled by the compression middleware.

import * as request from 'supertest';
import * as compression from 'compression';
import * as zlib from 'zlib';
import { createCompressionMiddleware } from '../src/common/middleware/response-compression.middleware';

⚠️ Potential issue | 🟠 Major

This only tests the helper, not the app integration.

Because this suite builds a controller-only app and mounts createCompressionMiddleware() itself, it would still pass if backend/src/main.ts or backend/src/app.module.ts stopped wiring compression in production. For this PR’s acceptance criteria, at least one e2e path should go through the real app/module bootstrap.

Also applies to: 97-104

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@backend/test/compression-integration.e2e-spec.ts` at line 5, The test
currently mounts createCompressionMiddleware() directly and only exercises the
helper, so update the e2e spec to start the real application bootstrap (the
actual app/module used in production) for at least one test path so compression
wiring in backend/src/main.ts or AppModule is validated; specifically replace
the controller-only app setup with the production bootstrap (or import and call
the existing bootstrap/createApp function) for one of the test cases (also
update the tests around lines 97-104 mentioned in the comment) so requests flow
through the real module wiring and middleware instead of directly injecting
createCompressionMiddleware().

Comment on lines +52 to +64
@Post('files/upload')
uploadFile() {
  return Buffer.from('binary-upload-response');
}

@Get('files/:filename')
downloadFile(@Param('filename') _filename: string) {
  return Buffer.from('binary-download-response');
}

@Get('files/:filename/stream')
streamFile(@Param('filename') _filename: string) {
  return Buffer.from('binary-stream-response');
}

⚠️ Potential issue | 🟠 Major

The binary-route check is masked by the 1KB threshold.

These payloads are tiny, so content-encoding stays empty even when the binary-endpoint exclusion is broken. Make the file responses exceed the threshold and keep them truly binary, otherwise this spec is only proving “small responses are not compressed,” not that the binary routes are excluded.

Also applies to: 144-162
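One way to make the fixture responses both genuinely binary and above the threshold is to return random bytes. A sketch using Node's crypto module (the 4KB size is an arbitrary choice, just comfortably over 1KB):

```typescript
import { randomBytes } from 'crypto';

// 4KB of random bytes: above the 1KB threshold and non-text, so a
// compressed Content-Encoding on these routes would indicate the
// binary-endpoint exclusion is broken rather than the threshold firing
function makeBinaryFixture(size = 4096): Buffer {
  return randomBytes(size);
}

const fixture = makeBinaryFixture();
```

The test handlers would return makeBinaryFixture() instead of the current short Buffers.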

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@backend/test/compression-integration.e2e-spec.ts` around lines 52 - 64, The
test's binary routes (uploadFile, downloadFile, streamFile and the other
handlers around lines 144-162) return tiny text-like Buffers so compression
remains off due to the 1KB threshold and doesn't validate the binary-route
exclusion; change those handlers to return >1KB of non-text binary data (e.g., a
randomized or repeated byte pattern Buffer) so responses are genuinely binary
and exceed the compression threshold, then rerun the spec to ensure
content-encoding remains empty for these endpoints.

Comment on lines +165 to +184
it('achieves at least 40% size reduction for target large-list endpoints', async () => {
  for (const endpoint of endpointCases) {
    const baselineResponse = await request(app.getHttpServer())
      .get(endpoint.path)
      .set('Accept-Encoding', 'identity')
      .expect(200);

    const baselinePayload = Buffer.from(JSON.stringify(baselineResponse.body));
    const originalSize = baselinePayload.length;
    const brotliSize = zlib.brotliCompressSync(baselinePayload, {
      params: {
        [zlib.constants.BROTLI_PARAM_QUALITY]: 4,
      },
    }).length;
    const gzipSize = zlib.gzipSync(baselinePayload).length;
    const brotliRatio = 1 - brotliSize / originalSize;
    const gzipRatio = 1 - gzipSize / originalSize;
    expect(brotliRatio).toBeGreaterThanOrEqual(0.4);
    expect(gzipRatio).toBeGreaterThanOrEqual(0.4);
  }
});

⚠️ Potential issue | 🟠 Major

The 40% assertion is measuring local zlib, not the server response.

Right now this mostly proves the synthetic payload is compressible. The loop never fetches the br/gzip variants for these endpoints, so it can pass even if /tips/history, /analytics, /search, or /activities/feed are served uncompressed. Compare the raw response bytes for identity, br, and gzip instead.

Suggested direction
  it('achieves at least 40% size reduction for target large-list endpoints', async () => {
    for (const endpoint of endpointCases) {
      const baselineResponse = await request(app.getHttpServer())
        .get(endpoint.path)
        .set('Accept-Encoding', 'identity')
+       .buffer(true)
+       .parse(parseRawBuffer)
        .expect(200);

-      const baselinePayload = Buffer.from(JSON.stringify(baselineResponse.body));
-      const originalSize = baselinePayload.length;
-      const brotliSize = zlib.brotliCompressSync(baselinePayload, {
-        params: {
-          [zlib.constants.BROTLI_PARAM_QUALITY]: 4,
-        },
-      }).length;
-      const gzipSize = zlib.gzipSync(baselinePayload).length;
+      const brResponse = await request(app.getHttpServer())
+        .get(endpoint.path)
+        .set('Accept-Encoding', 'br')
+        .buffer(true)
+        .parse(parseRawBuffer)
+        .expect(200);
+
+      const gzipResponse = await request(app.getHttpServer())
+        .get(endpoint.path)
+        .set('Accept-Encoding', 'gzip')
+        .buffer(true)
+        .parse(parseRawBuffer)
+        .expect(200);
+
+      expect(brResponse.headers['content-encoding']).toBe('br');
+      expect(gzipResponse.headers['content-encoding']).toBe('gzip');
+
+      const originalSize = baselineResponse.body.length;
+      const brotliSize = brResponse.body.length;
+      const gzipSize = gzipResponse.body.length;

       const brotliRatio = 1 - brotliSize / originalSize;
       const gzipRatio = 1 - gzipSize / originalSize;
       expect(brotliRatio).toBeGreaterThanOrEqual(0.4);
       expect(gzipRatio).toBeGreaterThanOrEqual(0.4);
     }
   });
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@backend/test/compression-integration.e2e-spec.ts` around lines 165 - 184, The
test currently computes compression ratios by compressing the local JSON payload
with zlib instead of measuring the server's actual responses; change the loop
over endpointCases to perform three real requests per endpoint (Accept-Encoding:
identity, br, gzip), capture the raw response bytes of each server response
(ensure you fetch raw/buffered bodies, not parsed JSON), compute sizes from
those response buffers and then compute brotliRatio and gzipRatio as 1 -
compressedSize / originalSize using the server-returned 'br' and 'gzip' response
bodies, and assert those ratios are >= 0.4; update references around the test
name and variables endpointCases, baselineResponse, brotliSize, gzipSize,
brotliRatio and gzipRatio accordingly.

@OlufunbiIK
Owner

LGTM

@OlufunbiIK OlufunbiIK merged commit 27237c5 into OlufunbiIK:main Mar 23, 2026
2 of 3 checks passed
