
Conversation


@miquelruiz miquelruiz commented Sep 15, 2025

Vhost-style URLs for S3 have been an option for a while, yet Fluent Bit supports only path-style URLs, which, according to the documentation, prevents it from working with the S3 FIPS endpoints.

This PR adds optional, opt-in support for vhost-style URLs on a per-output basis.

Fixes #10390.
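For context, the two addressing styles differ only in where the bucket name appears; the bucket and region below are illustrative:

```
Path-style (current default):    https://s3.us-east-1.amazonaws.com/my-bucket/<key>
Vhost-style (this PR, opt-in):   https://my-bucket.s3.us-east-1.amazonaws.com/<key>
```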


Testing
Before we can approve your change, please submit the following in a comment:

  • Example configuration file for the change
---
pipeline:
  inputs:
    - name: random
      samples: -1
      interval_sec: 1
      interval_nsec: 0
  outputs:
    - name: s3
      bucket: my-bucket
      region: us-east-1
      vhost_style_urls: true
  • Debug log output from testing the change
$ bin/fluent-bit -c ../xxx.yml 
Fluent Bit v4.1.0
* Copyright (C) 2015-2025 The Fluent Bit Authors
* Fluent Bit is a CNCF sub-project under the umbrella of Fluentd
* https://fluentbit.io

______ _                  _    ______ _ _             ___   __  
|  ___| |                | |   | ___ (_) |           /   | /  | 
| |_  | |_   _  ___ _ __ | |_  | |_/ /_| |_  __   __/ /| | `| | 
|  _| | | | | |/ _ \ '_ \| __| | ___ \ | __| \ \ / / /_| |  | | 
| |   | | |_| |  __/ | | | |_  | |_/ / | |_   \ V /\___  |__| |_
\_|   |_|\__,_|\___|_| |_|\__| \____/|_|\__|   \_/     |_(_)___/


[2025/09/15 16:43:00.869367668] [ info] Configuration:
[2025/09/15 16:43:00.869414328] [ info]  flush time     | 1.000000 seconds
[2025/09/15 16:43:00.869418469] [ info]  grace          | 5 seconds
[2025/09/15 16:43:00.869420256] [ info]  daemon         | 0
[2025/09/15 16:43:00.869421794] [ info] ___________
[2025/09/15 16:43:00.869423521] [ info]  inputs:
[2025/09/15 16:43:00.869425073] [ info]      random
[2025/09/15 16:43:00.869429109] [ info] ___________
[2025/09/15 16:43:00.869431436] [ info]  filters:
[2025/09/15 16:43:00.869432929] [ info] ___________
[2025/09/15 16:43:00.869434451] [ info]  outputs:
[2025/09/15 16:43:00.869436621] [ info]      s3.0
[2025/09/15 16:43:00.869440437] [ info] ___________
[2025/09/15 16:43:00.869445062] [ info]  collectors:
[2025/09/15 16:43:00.870114445] [ info] [fluent bit] version=4.1.0, commit=7e50f95925, pid=52890
[2025/09/15 16:43:00.870128032] [debug] [engine] coroutine stack size: 24576 bytes (24.0K)
[2025/09/15 16:43:00.870276627] [ info] [storage] ver=1.5.3, type=memory, sync=normal, checksum=off, max_chunks_up=128
[2025/09/15 16:43:00.870341901] [ info] [simd    ] disabled
[2025/09/15 16:43:00.870344645] [ info] [cmetrics] version=1.0.5
[2025/09/15 16:43:00.870430418] [ info] [ctraces ] version=0.6.6
[2025/09/15 16:43:00.870518607] [ info] [input:random:random.0] initializing
[2025/09/15 16:43:00.870526738] [ info] [input:random:random.0] storage_strategy='memory' (memory only)
[2025/09/15 16:43:00.870535728] [debug] [random:random.0] created event channels: read=28 write=29
[2025/09/15 16:43:00.870672091] [debug] [input:random:random.0] interval_sec=1 interval_nsec=0
[2025/09/15 16:43:00.870685605] [debug] [s3:s3.0] created event channels: read=30 write=31
[2025/09/15 16:43:00.872993013] [ info] [output:s3:s3.0] Using upload size 100000000 bytes
[2025/09/15 16:43:00.873193355] [ info] [output:s3:s3.0] New endpoint: mruiz-potato.s3.us-east-1.amazonaws.com
[2025/09/15 16:43:00.873630262] [debug] [aws_credentials] Initialized Env Provider in standard chain
[2025/09/15 16:43:00.873637981] [debug] [aws_credentials] creating profile (null) provider
[2025/09/15 16:43:00.873715397] [debug] [aws_credentials] Initialized AWS Profile Provider in standard chain
[2025/09/15 16:43:00.873731598] [debug] [aws_credentials] Not initializing EKS provider because AWS_ROLE_ARN was not set
[2025/09/15 16:43:00.873740706] [debug] [aws_credentials] Not initializing ECS/EKS HTTP Provider because AWS_CONTAINER_CREDENTIALS_RELATIVE_URI and AWS_CONTAINER_CREDENTIALS_FULL_URI is not set
[2025/09/15 16:43:00.873747298] [debug] [aws_credentials] Initialized EC2 Provider in standard chain
[2025/09/15 16:43:00.873755650] [debug] [aws_credentials] Sync called on the EC2 provider
[2025/09/15 16:43:00.873761670] [debug] [aws_credentials] Init called on the env provider
[2025/09/15 16:43:00.873764884] [debug] [aws_credentials] upstream_set called on the EC2 provider
[2025/09/15 16:43:00.873968619] [ info] [output:s3:s3.0] initializing worker
[2025/09/15 16:43:00.874005387] [ info] [output:s3:s3.0] worker #0 started
[2025/09/15 16:43:00.874028270] [ info] [sp] stream processor started
[2025/09/15 16:43:00.874178029] [ info] [engine] Shutdown Grace Period=5, Shutdown Input Grace Period=2
[2025/09/15 16:43:01.899689407] [debug] [task] created task=0x7f10f40846b0 id=0 OK
[2025/09/15 16:43:01.899705102] [debug] [output:s3:s3.0] task_id=0 assigned to thread #0
[2025/09/15 16:43:01.899817574] [debug] [output:s3:s3.0] Creating upload timer with frequency 60s
[2025/09/15 16:43:01.900269435] [debug] [out flush] cb_destroy coro_id=0
[2025/09/15 16:43:01.900293789] [debug] [task] destroy task=0x7f10f40846b0 (task_id=0)
[2025/09/15 16:43:02.899711067] [debug] [task] created task=0x7f10f4084b30 id=0 OK
[2025/09/15 16:43:02.899731669] [debug] [output:s3:s3.0] task_id=0 assigned to thread #0
...
[2025/09/15 16:43:37.902558535] [debug] [out flush] cb_destroy coro_id=36
[2025/09/15 16:43:37.902663702] [debug] [task] destroy task=0x7f10f4084e70 (task_id=0)
^C[2025/09/15 16:43:38] [engine] caught signal (SIGINT)
[2025/09/15 16:43:38.155642726] [debug] [task] created task=0x7f10f4084e70 id=0 OK
[2025/09/15 16:43:38.155660279] [debug] [output:s3:s3.0] task_id=0 assigned to thread #0
[2025/09/15 16:43:38.155670487] [ warn] [engine] service will shutdown in max 5 seconds
[2025/09/15 16:43:38.155673830] [debug] [engine] task 0 already scheduled to run, not re-scheduling it.
[2025/09/15 16:43:38.155676494] [ info] [engine] pausing all inputs..
[2025/09/15 16:43:38.155680470] [ info] [input] pausing random.0
[2025/09/15 16:43:38.155708970] [debug] [out flush] cb_destroy coro_id=37
[2025/09/15 16:43:38.155739343] [debug] [task] destroy task=0x7f10f4084e70 (task_id=0)
[2025/09/15 16:43:38.902090586] [ info] [engine] service has stopped (0 pending tasks)
[2025/09/15 16:43:38.902108353] [ info] [input] pausing random.0
[2025/09/15 16:43:38.902146810] [ info] [output:s3:s3.0] thread worker #0 stopping...
[2025/09/15 16:43:38.902164540] [ info] [output:s3:s3.0] initializing worker
[2025/09/15 16:43:38.902217528] [ info] [output:s3:s3.0] thread worker #0 stopped
[2025/09/15 16:43:38.903099452] [ info] [output:s3:s3.0] Sending all locally buffered data to S3
[2025/09/15 16:43:39.292732905] [debug] [upstream] KA connection #32 to mruiz-potato.s3.us-east-1.amazonaws.com:443 is connected
[2025/09/15 16:43:39.292756712] [debug] [http_client] not using http_proxy for header
[2025/09/15 16:43:39.292773712] [debug] [aws_credentials] Requesting credentials from the env provider..
[2025/09/15 16:43:39.367545633] [debug] [upstream] KA connection #32 to mruiz-potato.s3.us-east-1.amazonaws.com:443 is now available
[2025/09/15 16:43:39.367561000] [debug] [output:s3:s3.0] PutObject http status=200
[2025/09/15 16:43:39.367564798] [ info] [output:s3:s3.0] Successfully uploaded object /fluent-bit-logs/random.0/2025/09/15/16/43/01-objectZjc9DKdy
  • Attached Valgrind output that shows no leaks or memory corruption was found
$ valgrind --leak-check=full ./bin/fluent-bit -c ../xxx.yml 
==53526== Memcheck, a memory error detector
==53526== Copyright (C) 2002-2022, and GNU GPL'd, by Julian Seward et al.
==53526== Using Valgrind-3.19.0 and LibVEX; rerun with -h for copyright info
==53526== Command: ./bin/fluent-bit -c ../xxx.yml
==53526== 
<regular logs, like the ones attached above>
==53526== 
==53526== HEAP SUMMARY:
==53526==     in use at exit: 0 bytes in 0 blocks
==53526==   total heap usage: 18,923 allocs, 18,923 frees, 3,608,422 bytes allocated
==53526== 
==53526== All heap blocks were freed -- no leaks are possible
==53526== 
==53526== For lists of detected and suppressed errors, rerun with: -s
==53526== ERROR SUMMARY: 0 errors from 0 contexts (suppressed: 0 from 0)

Documentation

  • Documentation required for this feature

Fluent Bit is licensed under Apache 2.0, by submitting this pull request I understand that this code will be released under the terms of that license.

Summary by CodeRabbit

  • New Features
    • Added optional support for virtual-hosted–style S3 URLs via the vhost_style_urls configuration (default: false). Applies to both single-part (PutObject) and multipart uploads, adjusting request paths and object key handling accordingly. Includes clearer logging of the final endpoint and object key when enabled.
  • Tests
    • Added runtime test to validate successful uploads using vhost_style_urls.


coderabbitai bot commented Sep 15, 2025

Walkthrough

Refactors endpoint setup by adding init_endpoint and invoking it in initialization. Introduces a new vhost_style_urls config and struct field, and updates URI construction for PutObject and multipart operations to support virtual-host addressing. Adds a runtime test enabling vhost-style URLs. No external APIs changed beyond configuration.

Changes

  • S3 core: endpoint and config (plugins/out_s3/s3.c, plugins/out_s3/s3.h): Added init_endpoint(...) to parse/build the endpoint (scheme, host, port) and replaced the inline endpoint logic with the helper. Introduced the vhost_style_urls config option and struct field (default false). Adjusted logging and state (ctx->endpoint, ctx->free_endpoint).
  • S3 core: URI building, PutObject and multipart (plugins/out_s3/s3.c, plugins/out_s3/s3_multipart.c): Updated URI construction to honor vhost_style_urls for PutObject and multipart (create/upload_part/complete). When enabled, URIs omit the bucket prefix in the path and rely on virtual-host addressing; path-style is preserved otherwise. Final key extraction/logging is handled accordingly.
  • Tests (tests/runtime/out_s3.c): Added the runtime test flb_test_s3_vhost_urls and registered it. It configures the S3 output with vhost_style_urls true and use_put_object true to validate the flow.

Sequence Diagram(s)

sequenceDiagram
  autonumber
  participant FB as Fluent Bit S3 Plugin
  participant Cfg as Config Map
  participant EP as init_endpoint()
  participant S3 as S3/OSS Service

  rect rgb(239,247,255)
  note over FB,Cfg: Initialization
  FB->>Cfg: Read settings (region, endpoint, vhost_style_urls)
  FB->>EP: Build endpoint (scheme/host/port)
  EP-->>FB: ctx.endpoint
  end

  rect rgb(245,252,240)
  note over FB,S3: PutObject upload
  alt vhost_style_urls = true
    FB->>S3: PUT https://<bucket>.<host>/<object-key>
  else path-style
    FB->>S3: PUT https://<host>/<bucket>/<object-key>
  end
  S3-->>FB: Response
  end
sequenceDiagram
  autonumber
  participant FB as Fluent Bit S3 Plugin
  participant S3 as S3/OSS Service

  rect rgb(245,252,240)
  note over FB,S3: Multipart upload (no presigned URL)
  alt vhost_style_urls = true
    FB->>S3: POST https://<bucket>.<host>/<key>?uploads=
    S3-->>FB: UploadId
    loop parts
      FB->>S3: PUT https://<bucket>.<host>/<key>?partNumber=N&uploadId=ID
      S3-->>FB: ETag
    end
    FB->>S3: POST https://<bucket>.<host>/<key>?uploadId=ID
  else path-style
    FB->>S3: POST https://<host>/<bucket>/<key>?uploads=
    S3-->>FB: UploadId
    loop parts
      FB->>S3: PUT https://<host>/<bucket>/<key>?partNumber=N&uploadId=ID
      S3-->>FB: ETag
    end
    FB->>S3: POST https://<host>/<bucket>/<key>?uploadId=ID
  end
  S3-->>FB: Complete response
  end

Estimated code review effort

🎯 3 (Moderate) | ⏱️ ~25 minutes

Poem

A hop to the cloud with a virtual twist,
Buckets on the left—no paths to miss.
Endpoints primed, we bound and twirl,
Multipart dances, give it a whirl.
Tests nibble green, ears held high—
Vhost dreams now multiply. 🐇✨


Pre-merge checks

❌ Failed checks (1 warning)
  • Docstring Coverage (⚠️ Warning): Docstring coverage is 0.00%, which is below the required threshold of 80.00%. Resolution: run @coderabbitai generate docstrings to improve docstring coverage.
✅ Passed checks (4 passed)
  • Linked Issues Check (✅ Passed): The PR implements an opt-in vhost-style mode via a new vhost_style_urls config flag, refactors endpoint initialization, and updates PutObject and multipart URI construction (plus tests), which directly addresses the linked issue #10390 requesting virtual-hosted S3 support. The change provides the requested configuration-driven behavior (instead of automatic detection) so Fluent Bit can interoperate with Aliyun OSS and other S3-compatible endpoints that forbid path-style requests. Given the code changes, tests, and example config included in the PR, the coding objectives from the linked issue are satisfied.
  • Out of Scope Changes Check (✅ Passed): All modifications are confined to the S3 plugin files (plugins/out_s3/s3.c, s3.h, s3_multipart.c) and the associated test (tests/runtime/out_s3.c), and are directly related to enabling vhost-style addressing and endpoint handling. There are no edits to unrelated plugins, global configuration subsystems, or other public APIs beyond the intentionally added vhost_style_urls field. Therefore no out-of-scope changes were detected.
  • Description Check (✅ Passed): Check skipped: CodeRabbit's high-level summary is enabled.
  • Title Check (✅ Passed): The title "Support vhost-style S3 URL's" is concise and accurately summarizes the primary change, adding optional support for virtual-host-style S3 URLs, matching the code, config, and tests in the changeset. The only minor nit is the apostrophe in "URL's," which is grammatically incorrect for a plural.

@miquelruiz miquelruiz changed the title Support vhost-style S3 URL's Support vhost-style S3 URLs Sep 15, 2025

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 1

🧹 Nitpick comments (2)
tests/runtime/out_s3.c (1)

231-275: Test skips silently when environment variables are missing.

The test returns early without any indication when FLB_OUT_S3_TEST_BUCKET or FLB_OUT_S3_TEST_REGION are not set. This could lead to false positives in test suites.

Consider using TEST_CHECK with a message or flb_warn to indicate the test was skipped:

 bucket = getenv("FLB_OUT_S3_TEST_BUCKET");
 if (bucket == NULL) {
+    TEST_MSG("Test skipped: FLB_OUT_S3_TEST_BUCKET not set");
     return;
 }
plugins/out_s3/s3.c (1)

663-664: Endpoint free condition is always true; the logic can be simplified.

When vhost-style URLs are enabled, the code frees the old endpoint only if ctx->free_endpoint == FLB_TRUE. However, ctx->free_endpoint is set to FLB_TRUE earlier in the function (line 628 or 643), so this condition will always be true at this point. This is correct, but the code could be clearer.

Consider simplifying the logic since free_endpoint is always true at this point:

-// Free the old one since we no longer need it
-if (ctx->free_endpoint == FLB_TRUE) {
-    flb_free(ctx->endpoint);
-}
+// Free the old endpoint since we allocated it
+flb_free(ctx->endpoint);
📜 Review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 60db310 and 7e50f95.

📒 Files selected for processing (4)
  • plugins/out_s3/s3.c (6 hunks)
  • plugins/out_s3/s3.h (1 hunks)
  • plugins/out_s3/s3_multipart.c (3 hunks)
  • tests/runtime/out_s3.c (2 hunks)
🧰 Additional context used
🧬 Code graph analysis (3)
plugins/out_s3/s3_multipart.c (1)
src/flb_sds.c (1)
  • flb_sds_printf (336-387)
tests/runtime/out_s3.c (1)
src/flb_lib.c (9)
  • flb_create (138-220)
  • flb_input (261-271)
  • flb_input_set (300-330)
  • flb_output (274-284)
  • flb_output_set (515-546)
  • flb_start (914-925)
  • flb_lib_push (774-801)
  • flb_stop (942-985)
  • flb_destroy (223-258)
plugins/out_s3/s3.c (5)
src/flb_output.c (1)
  • flb_output_get_property (1096-1099)
src/aws/flb_aws_util.c (2)
  • removeProtocol (165-170)
  • flb_aws_endpoint (75-117)
src/flb_utils.c (2)
  • flb_utils_split (464-467)
  • flb_utils_split_free (477-489)
src/flb_sds.c (3)
  • flb_sds_create_size (92-95)
  • flb_sds_printf (336-387)
  • flb_sds_destroy (389-399)
include/fluent-bit/flb_mem.h (1)
  • flb_free (126-128)
🔇 Additional comments (10)
plugins/out_s3/s3.h (1)

125-125: LGTM! Field placement is appropriate.

The new vhost_style_urls field is correctly placed in the configuration section of the struct, grouped with other connection-related settings like insecure and port.

tests/runtime/out_s3.c (1)

264-265: Good test coverage for the new feature.

The test correctly validates the vhost-style URLs feature by:

  1. Setting vhost_style_urls to "true"
  2. Using use_put_object to test with the simpler API path
  3. Following the existing test pattern
plugins/out_s3/s3_multipart.c (3)

432-438: Consistent vhost-style URL handling in multipart operations.

The implementation correctly handles vhost-style URLs for complete_multipart_upload by:

  • Checking if ctx->vhost_style_urls == FLB_TRUE
  • Constructing the URI without the bucket prefix when vhost-style is enabled
  • Maintaining backward compatibility with path-style URLs

580-584: Consistent implementation across all multipart operations.

The vhost-style URL logic is correctly applied to create_multipart_upload with the same pattern as other operations.


708-716: Complete coverage of multipart operations.

The vhost-style URL support is consistently implemented across all three multipart operations (complete, create, upload_part).

plugins/out_s3/s3.c (5)

589-677: Well-structured endpoint initialization with vhost-style support.

The new init_endpoint function properly:

  1. Handles both custom endpoints and default region-based endpoints
  2. Correctly parses protocol (http/https) and ports
  3. Implements vhost-style URL construction by prepending the bucket name
  4. Properly manages memory with appropriate cleanup

880-884: Clean integration of endpoint initialization.

The endpoint initialization is properly integrated into the plugin initialization flow with appropriate error handling.


1552-1555: Correct URI construction for vhost-style in PutObject.

The implementation properly handles vhost-style URLs in the s3_put_object function:

  • Excludes bucket prefix from URI when vhost-style is enabled
  • Maintains consistent behavior for both random-suffix and standard key formats

Also applies to: 1569-1584


1640-1644: URI display logic correctly handles vhost-style URLs.

The code properly adjusts the displayed object key based on whether vhost-style URLs are in use.


4098-4103: Clear and helpful configuration option.

The new configuration option is well-documented with:

  • Clear description of the feature
  • Sensible default value (false for backward compatibility)
  • Link to AWS documentation for reference

Comment on lines +650 to +674
if (ctx->vhost_style_urls == FLB_TRUE) {
    // Add 1 because we need an extra dot
    len = strlen(ctx->endpoint) + strlen(ctx->bucket) + 1;
    url = flb_sds_create_size(len);
    tmp_sds = flb_sds_printf(&url, "%s.%s", ctx->bucket, ctx->endpoint);
    if (!tmp_sds) {
        flb_sds_destroy(url);
        flb_plg_error(ctx->ins, "Could not construct vhost-style S3 endpoint");
        return -1;
    }
    url = tmp_sds;

    // Free the old one since we no longer need it
    if (ctx->free_endpoint == FLB_TRUE) {
        flb_free(ctx->endpoint);
    }

    ctx->endpoint = flb_strndup(url, flb_sds_len(url));
    flb_sds_destroy(url);
    if (ctx->endpoint == NULL) {
        flb_plg_error(ctx->ins, "error duplicating endpoint string");
        return -1;
    }
    flb_plg_info(ctx->ins, "New endpoint: %s", ctx->endpoint);
}

💡 Verification agent

🧩 Analysis chain

Verify bucket name compatibility with vhost-style URLs.

When using vhost-style URLs, bucket names must be DNS-compliant. The code should validate that the bucket name doesn't contain characters that would make it invalid as a DNS hostname (e.g., underscores, uppercase letters).

Consider adding validation for DNS-compliant bucket names when vhost-style URLs are enabled:

if (ctx->vhost_style_urls == FLB_TRUE) {
    // Validate bucket name is DNS-compliant
    // - Must be 3-63 characters
    // - Must start and end with lowercase letter or number
    // - Can contain lowercase letters, numbers, and hyphens
    // - Cannot contain consecutive periods or be formatted as IP address
}

🏁 Script executed:

#!/bin/bash
# Check if there's existing bucket name validation in the codebase
rg -n "bucket.*valid|valid.*bucket" --type c
# Look for DNS validation patterns
rg -n "DNS|dns.*compliant|hostname.*valid" --type c
# Check for bucket name restrictions in documentation
fd README -e md -x grep -l "bucket.*name.*restriction\|DNS"

Length of output: 107459


Validate bucket name before constructing vhost-style endpoint (DNS‑compliant).

plugins/out_s3/s3.c (around lines 650–674) currently concatenates ctx->bucket and ctx->endpoint into a hostname without validating the bucket. Add a DNS-compatibility check and fall back to path-style (or return an error) when the bucket is not DNS-compatible. Enforce: 3–63 chars; only lowercase letters, digits, hyphen and dot; start/end with letter or digit; no consecutive dots; not IPv4-formatted. A practical check: match ^[a-z0-9][a-z0-9.-]{1,61}[a-z0-9]$, then reject names matching ^(\d+\.){3}\d+$ or containing '..' before using vhost-style formatting.


Successfully merging this pull request may close these issues.

S3 output plugin getting SecondLevelDomainForbidden when used with Aliyun OSS