Hive Gateway v2 documentation and migration #6817

Draft · wants to merge 30 commits into `main`
Commits
afd0756
start with migration
enisdenjo May 21, 2025
120d41f
move to migration guides
enisdenjo May 22, 2025
4ca1999
logger
enisdenjo May 22, 2025
dc88513
sh npm2yarn
enisdenjo May 22, 2025
dbad5a9
documentation and more migration
enisdenjo May 22, 2025
2cd622a
more migration
enisdenjo May 22, 2025
e59fb34
no color
enisdenjo May 22, 2025
ae5b509
daily log writer
enisdenjo May 22, 2025
94a7a0f
also link
enisdenjo May 22, 2025
fde31e6
logger writer flush
enisdenjo Jun 23, 2025
7b215e9
logtape writer
enisdenjo Jun 23, 2025
0a949af
no forking
enisdenjo Jun 30, 2025
11102c6
no node 18 and some fixes
enisdenjo Jun 30, 2025
0f0ec52
no multipart
enisdenjo Jun 30, 2025
362bc4a
no mocking
enisdenjo Jun 30, 2025
a278318
actual link
enisdenjo Jun 30, 2025
6680e83
docs(opentelemetry): Update documentation for Hive Gateway v2 (#6852)
EmrysMyrddin Jul 1, 2025
b0e48d5
docs(gateway): add migration for subgraph name of execution requests …
EmrysMyrddin Jul 17, 2025
5aab345
cleanup
enisdenjo Jul 17, 2025
d81717e
docs(opentelemetry): Add new CLI options documentation (#6877)
EmrysMyrddin Jul 17, 2025
cf318ff
a bit better
enisdenjo Jul 17, 2025
8650c1c
typos and missing comma
enisdenjo Jul 17, 2025
4097011
root context
enisdenjo Jul 17, 2025
9df62c6
new lines for visibility
enisdenjo Jul 17, 2025
7051fc7
root context
enisdenjo Jul 17, 2025
bc439bb
supeRgraph
enisdenjo Jul 17, 2025
abfc701
example in example page, less migration
enisdenjo Jul 17, 2025
19844c9
match order like rest
enisdenjo Jul 17, 2025
d280e66
not removed
enisdenjo Jul 17, 2025
951f52e
recommend hive access token and target
enisdenjo Jul 18, 2025
1 change: 1 addition & 0 deletions packages/web/docs/src/content/_meta.ts
@@ -5,6 +5,7 @@ export default {
'high-availability-cdn': 'High-Availability CDN',
dashboard: 'Dashboard',
gateway: 'Gateway',
logger: 'Logger',
management: 'Management',
'other-integrations': 'Other Integrations',
'api-reference': 'CLI/API Reference',
123 changes: 91 additions & 32 deletions packages/web/docs/src/content/api-reference/gateway-cli.mdx
@@ -19,51 +19,105 @@ hive-gateway --help

which will print out the following:

{/* IMPORTANT: please dont forget to update the following when arguments change. simply run `node --import tsx packages/hive-gateway/src/bin.ts --help` and copy over the text */}
{/* IMPORTANT: please dont forget to update the following when arguments change. simply run `node --import tsx packages/gateway/src/bin.ts --help` and copy over the text */}

```
Usage: hive-gateway [options] [command]

Federated GraphQL Gateway
Unify and accelerate your data graph across diverse services with Hive Gateway, which seamlessly
integrates with Apollo Federation.

Options:
--fork <count> count of workers to spawn. uses "24" (available parallelism) workers when NODE_ENV is "production",
otherwise "1" (the main) worker (default: 1) (env: FORK)
-c, --config-path <path> path to the configuration file. defaults to the following files respectively in the current working
directory: gateway.ts, gateway.mts, gateway.cts, gateway.js, gateway.mjs, gateway.cjs (env:
CONFIG_PATH)
--fork <number> number of workers to spawn. (default: 1) (env:
FORK)
-c, --config-path <path> path to the configuration file. defaults to
the following files respectively in the
current working directory: gateway.ts,
gateway.mts, gateway.cts, gateway.js,
gateway.mjs, gateway.cjs (env: CONFIG_PATH)
-h, --host <hostname> host to use for serving (default: 0.0.0.0)
-p, --port <number> port to use for serving (default: 4000) (env: PORT)
--polling <duration> schema polling interval in human readable duration (default: 10s) (env: POLLING)
-p, --port <number> port to use for serving (default: 4000) (env:
PORT)
--polling <duration> schema polling interval in human readable
duration (default: 10s) (env: POLLING)
--no-masked-errors don't mask unexpected errors in responses
--masked-errors mask unexpected errors in responses (default: true)
--hive-usage-target <target> Hive registry target to which the usage data should be reported to. requires the
"--hive-usage-access-token <token>" option (env: HIVE_USAGE_TARGET)
--hive-usage-access-token <token> Hive registry access token for usage metrics reporting. requires the "--hive-usage-target <target>"
option (env: HIVE_USAGE_ACCESS_TOKEN)
--hive-persisted-documents-endpoint <endpoint> [EXPERIMENTAL] Hive CDN endpoint for fetching the persisted documents. requires the
"--hive-persisted-documents-token <token>" option
--hive-persisted-documents-token <token> [EXPERIMENTAL] Hive persisted documents CDN endpoint token. requires the
"--hive-persisted-documents-endpoint <endpoint>" option
--hive-cdn-endpoint <endpoint> Hive CDN endpoint for fetching the schema (env: HIVE_CDN_ENDPOINT)
--hive-cdn-key <key> Hive CDN API key for fetching the schema. implies that the "schemaPathOrUrl" argument is a url (env:
HIVE_CDN_KEY)
--apollo-graph-ref <graphRef> Apollo graph ref of the managed federation graph (<YOUR_GRAPH_ID>@<VARIANT>) (env: APOLLO_GRAPH_REF)
--apollo-key <apiKey> Apollo API key to use to authenticate with the managed federation up link (env: APOLLO_KEY)
--masked-errors mask unexpected errors in responses (default:
true)
--opentelemetry [exporter-endpoint] Enable OpenTelemetry integration with an
exporter using this option's value as
endpoint. By default, it uses OTLP HTTP, use
"--opentelemetry-exporter-type" to change the
default. (env: OPENTELEMETRY)
--opentelemetry-exporter-type <type> OpenTelemetry exporter type to use when
setting up OpenTelemetry integration. Requires
"--opentelemetry" to set the endpoint.
(choices: "otlp-http", "otlp-grpc", default:
"otlp-http", env: OPENTELEMETRY_EXPORTER_TYPE)
--hive-registry-token <token> [DEPRECATED] please use "--hive-target" and
"--hive-access-token" (env:
HIVE_REGISTRY_TOKEN)
--hive-usage-target <target> [DEPRECATED] please use --hive-target instead.
(env: HIVE_USAGE_TARGET)
--hive-target <target> Hive registry target to which the usage and
tracing data should be reported to. Requires
either "--hive-access-token <token>",
"--hive-usage-access-token <token>" or
"--hive-trace-access-token" option (env:
HIVE_TARGET)
--hive-access-token <token> Hive registry access token for usage metrics
reporting and tracing. Enables both usage
reporting and tracing. Requires the
"--hive-target <target>" option (env:
HIVE_ACCESS_TOKEN)
--hive-usage-access-token <token> Hive registry access token for usage
reporting. Enables Hive usage report. Requires
the "--hive-target <target>" option. It can't
be used together with "--hive-access-token"
(env: HIVE_USAGE_ACCESS_TOKEN)
--hive-trace-access-token <token> Hive registry access token for tracing.
Enables Hive tracing. Requires the
"--hive-target <target>" option. It can't be
used together with "--hive-access-token" (env:
HIVE_TRACE_ACCESS_TOKEN)
--hive-trace-endpoint <endpoint> Hive registry tracing endpoint. (default:
"https://api.graphql-hive.com/otel/v1/traces",
env: HIVE_TRACE_ENDPOINT)
--hive-persisted-documents-endpoint <endpoint> [EXPERIMENTAL] Hive CDN endpoint for fetching
the persisted documents. Requires the
"--hive-persisted-documents-token <token>"
option
--hive-persisted-documents-token <token> [EXPERIMENTAL] Hive persisted documents CDN
endpoint token. Requires the
"--hive-persisted-documents-endpoint
<endpoint>" option
--hive-cdn-endpoint <endpoint> Hive CDN endpoint for fetching the schema
(env: HIVE_CDN_ENDPOINT)
--hive-cdn-key <key> Hive CDN API key for fetching the schema.
implies that the "schemaPathOrUrl" argument is
a url (env: HIVE_CDN_KEY)
--apollo-graph-ref <graphRef> Apollo graph ref of the managed federation
graph (<YOUR_GRAPH_ID>@<VARIANT>) (env:
APOLLO_GRAPH_REF)
--apollo-key <apiKey> Apollo API key to use to authenticate with the
managed federation up link (env: APOLLO_KEY)
--disable-websockets Disable WebSockets support
--jit Enable Just-In-Time compilation of GraphQL documents (env: JIT)
--jit Enable Just-In-Time compilation of GraphQL
documents (env: JIT)
-V, --version output the version number
--help display help for command

Commands:
supergraph [options] [schemaPathOrUrl] serve a Federation supergraph provided by a compliant composition tool such as Mesh Compose or Apollo
Rover
subgraph [schemaPathOrUrl] serve a Federation subgraph that can be used with any Federation compatible router like Apollo
Router/Gateway
proxy [options] [endpoint] serve a proxy to a GraphQL API and add additional features such as monitoring/tracing, caching, rate
limiting, security, and more
supergraph [options] [schemaPathOrUrl] serve a Federation supergraph provided by a
compliant composition tool such as Mesh
Compose or Apollo Rover
subgraph [schemaPathOrUrl] serve a Federation subgraph that can be used
with any Federation compatible router like
Apollo Router/Gateway
proxy [options] [endpoint] serve a proxy to a GraphQL API and add
additional features such as
monitoring/tracing, caching, rate limiting,
security, and more
help [command] display help for command

```

All arguments can also be configured in the config file.
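For example, the recommended Hive Registry setup from the options above can be sketched as follows (the endpoint, target, and token values are placeholders, not real credentials):

```sh
# Serve a supergraph from the Hive CDN and report both usage and
# tracing data with a single access token (placeholder values):
hive-gateway supergraph \
  --hive-cdn-endpoint "https://cdn.graphql-hive.com/artifacts/v1/<target-id>" \
  --hive-cdn-key "<cdn-key>" \
  --hive-target "<org>/<project>/<target>" \
  --hive-access-token "<access-token>"
```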
@@ -79,7 +133,12 @@ configuration file if you provide these environment variables.

- `HIVE_CDN_ENDPOINT`: The endpoint of the Hive Registry CDN
- `HIVE_CDN_KEY`: The API key provided by Hive Registry to fetch the schema
- `HIVE_REGISTRY_TOKEN`: The token to push the metrics to Hive Registry
- `HIVE_TARGET`: The target for usage reporting and observability in Hive Console
- `HIVE_USAGE_TARGET` (deprecated, use `HIVE_TARGET`): The target for usage reporting and
observability in Hive Console
- `HIVE_ACCESS_TOKEN`: The access token used for usage reporting and observability in Hive Console
- `HIVE_USAGE_ACCESS_TOKEN`: The access token used for usage reporting only in Hive Console
- `HIVE_TRACE_ACCESS_TOKEN`: The access token used for observability only in Hive Console

[Learn more about Hive Registry integration here](/docs/gateway/supergraph-proxy-source)
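The same setup can be expressed purely through the environment variables listed above, which is convenient in container environments (placeholder values):

```sh
# Equivalent configuration via environment variables (placeholder values):
export HIVE_CDN_ENDPOINT="https://cdn.graphql-hive.com/artifacts/v1/<target-id>"
export HIVE_CDN_KEY="<cdn-key>"
export HIVE_TARGET="<org>/<project>/<target>"
export HIVE_ACCESS_TOKEN="<access-token>"
hive-gateway supergraph
```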

48 changes: 48 additions & 0 deletions packages/web/docs/src/content/api-reference/gateway-config.mdx
@@ -451,6 +451,54 @@ different phases of the GraphQL execution to manipulate or track the entire work

[See dedicated plugins feature page for more information](/docs/gateway/other-features/custom-plugins)

### `openTelemetry`

This option enables the OpenTelemetry integration and lets you customize its behavior.

[See the dedicated Monitoring/Tracing feature page for more information](/docs/gateway/monitoring-tracing)

#### `useContextManager`

Use the standard `@opentelemetry/api` context manager to keep track of the current span. This is an
advanced option that should be used carefully, as it can break your custom plugin spans.

#### `inheritContext`

If true (the default), the HTTP span will be created with the active span as its parent. If false,
the HTTP span will always be a root span, creating its own trace for each request.

#### `propagateContext`

If true (the default), uses the registered propagators to propagate the active context to upstream
services.

#### `configureDiagLogger`

If true (the default), sets up the standard `@opentelemetry/api` diag API to use the Hive Gateway
logger. A child logger is created with the prefix `[opentelemetry][diag] `.

#### `flushOnDispose`

If truthy (the default), the registered span processor will be forcefully flushed when the Hive
Gateway is about to shut down. To flush, the `forceFlush` method is called (if it exists), but you
can change the method to call by providing a string as the value of this option.

#### `traces`

Pass `true` to enable the tracing integration with all spans available.

This option can also be an object for more fine-grained configuration.

##### `tracer`

The `Tracer` instance to be used. The default is a tracer with the name `gateway`.

##### `spans`

An object with each key being a span name, and each value being either a boolean or a filtering
function to control which spans are reported.
[See Reported Spans and Events for details](/docs/gateway/monitoring-tracing#reported-spans).
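Putting the options above together, a `gateway.config.ts` could look roughly like the sketch below. The span names under `spans`, the filter function's argument shape, and the nesting of `spans` inside the `traces` object are illustrative assumptions based on the descriptions above, not the definitive API; consult the Reported Spans reference for the real names:

```ts
import { defineConfig } from '@graphql-hive/gateway'

export const gatewayConfig = defineConfig({
  openTelemetry: {
    inheritContext: true, // parent the HTTP span on the active span (default)
    propagateContext: true, // propagate the active context to upstream services (default)
    traces: {
      // Hypothetical span names and filter signature, for illustration only:
      spans: {
        http: true, // always report HTTP spans
        graphqlParse: false, // never report parse spans
        // filtering function: report only slow subgraph executions
        subgraphExecute: span => span.durationMs > 100
      }
    }
  }
})
```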

### `cors`

[See dedicated CORS feature page for more information](/docs/gateway/other-features/security/cors)
@@ -18,17 +18,14 @@ So you can benefit from the powerful plugins of Fastify ecosystem with Hive Gate

## Example

In order to connect Fastify's logger to the gateway, you need to install the
`@graphql-hive/logger-pino` package together with `@graphql-hive/gateway-runtime` and `fastify`.

```sh npm2yarn
npm i @graphql-hive/gateway-runtime @graphql-hive/logger-pino fastify
npm i @graphql-hive/gateway-runtime @graphql-hive/logger fastify
```

```ts
import fastify, { type FastifyReply, type FastifyRequest } from 'fastify'
import { createGatewayRuntime } from '@graphql-hive/gateway-runtime'
import { createLoggerFromPino } from '@graphql-hive/logger-pino'
import { createGatewayRuntime, Logger } from '@graphql-hive/gateway-runtime'
import { PinoLogWriter } from '@graphql-hive/logger/writers/pino'

// Request ID header used for tracking requests
const requestIdHeader = 'x-request-id'
@@ -52,8 +49,10 @@ interface FastifyContext {
}

const gateway = createGatewayRuntime<FastifyContext>({
// Integrate Fastify's logger / Pino with the gateway logger
logging: createLoggerFromPino(app.log),
// Use Fastify's logger (Pino) with Hive Logger
logging: new Logger({
writers: [new PinoLogWriter(app.log)]
}),
// Align with Fastify
requestId: {
// Use the same header name as Fastify
@@ -105,7 +105,7 @@ You can then generate the supergraph file using the `mesh-compose` CLI from
npx mesh-compose supergraph
```

#### Compose supegraph with Apollo Rover
#### Compose supergraph with Apollo Rover

Apollo Rover only allows exporting the supergraph as a GraphQL document, so we will have to wrap
this output in a JavaScript file: