263 changes: 261 additions & 2 deletions Documentation/resources.adoc

Large diffs are not rendered by default.

274 changes: 248 additions & 26 deletions Documentation/resources.md

Large diffs are not rendered by default.

1 change: 1 addition & 0 deletions assets/alertmanager/service.yaml
@@ -7,6 +7,7 @@ metadata:
* Port 9094 provides access to all the Alertmanager endpoints. Granting access requires binding a user to the `monitoring-alertmanager-view` role (for read-only operations) or `monitoring-alertmanager-edit` role in the `openshift-monitoring` project.
xx_omitted_before_deploy__test_file_name:openshift-monitoring_alertmanager-main_service_port_9094.yaml
* Port 9092 provides access to the Alertmanager endpoints restricted to a given project. Granting access requires binding a user to the `monitoring-rules-edit` cluster role or `monitoring-edit` cluster role in the project.
xx_omitted_before_deploy__test_file_name:openshift-monitoring_alertmanager-main_service_port_9092.yaml
* Port 9097 provides access to the `/metrics` endpoint only. This port is for internal use, and no other usage is guaranteed.
service.beta.openshift.io/serving-cert-secret-name: alertmanager-main-tls
labels:
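For context, a minimal sketch of how the port 9094 grant described in this annotation could be exercised from inside the cluster, following the pattern of the test scripts added later in this PR. The service account and binding names are illustrative, not part of the change:

oc create serviceaccount am-viewer --namespace=openshift-monitoring
# monitoring-alertmanager-view is a namespaced role in the openshift-monitoring project.
oc create rolebinding am-viewer-binding \
  --namespace=openshift-monitoring \
  --role=monitoring-alertmanager-view \
  --serviceaccount=openshift-monitoring:am-viewer
TOKEN=$(oc create token am-viewer --namespace=openshift-monitoring)
# Port 9094 is not exposed outside the cluster, so the request must run from a pod.
curl -k -f -H "Authorization: Bearer $TOKEN" "https://alertmanager-main.openshift-monitoring:9094/api/v2/alerts"
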
3 changes: 2 additions & 1 deletion assets/prometheus-k8s/service.yaml
@@ -4,7 +4,8 @@ metadata:
annotations:
openshift.io/description: |-
Expose the Prometheus web server within the cluster on the following ports:
* Port 9091 provides access to all the Prometheus endpoints. Granting access requires binding a user to the `cluster-monitoring-view` cluster role.
* Port 9091 provides access to all the Prometheus endpoints. Granting access requires binding a user to the `cluster-monitoring-view` cluster role or `cluster-monitoring-metrics-api` cluster role in the `openshift-monitoring` project.
xx_omitted_before_deploy__test_file_name:openshift-monitoring_prometheus-k8s_service_port_9091.yaml
* Port 9092 provides access to the `/metrics` and `/federate` endpoints only. This port is for internal use, and no other usage is guaranteed.
service.beta.openshift.io/serving-cert-secret-name: prometheus-k8s-tls
labels:
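A similar hedged sketch for the updated port 9091 description: bind either of the two cluster roles in the openshift-monitoring project, then query the Prometheus API from inside the cluster (names are illustrative):

oc create serviceaccount prom-reader --namespace=openshift-monitoring
# Per the updated annotation, cluster-monitoring-view or cluster-monitoring-metrics-api
# can be bound in the openshift-monitoring project.
oc create rolebinding prom-reader-binding \
  --namespace=openshift-monitoring \
  --clusterrole=cluster-monitoring-view \
  --serviceaccount=openshift-monitoring:prom-reader
TOKEN=$(oc create token prom-reader --namespace=openshift-monitoring)
curl -k -f -H "Authorization: Bearer $TOKEN" "https://prometheus-k8s.openshift-monitoring:9091/api/v1/query?query=up"
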
5 changes: 4 additions & 1 deletion assets/thanos-querier/service.yaml
@@ -4,9 +4,12 @@ metadata:
annotations:
openshift.io/description: |-
Expose the Thanos Querier web server within the cluster on the following ports:
* Port 9091 provides access to all the Thanos Querier endpoints. Granting access requires binding a user to the `cluster-monitoring-view` cluster role.
* Port 9091 provides access to all the Thanos Querier endpoints. Granting access requires binding a user to the `cluster-monitoring-view` cluster role or `cluster-monitoring-metrics-api` cluster role in the `openshift-monitoring` project.
xx_omitted_before_deploy__test_file_name:openshift-monitoring_thanos-querier_service_port_9091.yaml
* Port 9092 provides access to the `/api/v1/query`, `/api/v1/query_range/`, `/api/v1/labels`, `/api/v1/label/*/values`, and `/api/v1/series` endpoints restricted to a given project. Granting access requires binding a user to the `view` cluster role in the project.
xx_omitted_before_deploy__test_file_name:openshift-monitoring_thanos-querier_service_port_9092.yaml
* Port 9093 provides access to the `/api/v1/alerts`, and `/api/v1/rules` endpoints restricted to a given project. Granting access requires binding a user to the `monitoring-rules-edit` cluster role or `monitoring-edit` cluster role or `monitoring-rules-view` cluster role in the project.
xx_omitted_before_deploy__test_file_name:openshift-monitoring_thanos-querier_service_port_9093.yaml
* Port 9094 provides access to the `/metrics` endpoint only. This port is for internal use, and no other usage is guaranteed.
service.beta.openshift.io/serving-cert-secret-name: thanos-querier-tls
labels:
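The tenancy port (9092) works differently from 9091: requests must carry a namespace query parameter, and authorization is checked against the `view` cluster role in that project. A hedged sketch with illustrative names:

oc create namespace my-project
oc create serviceaccount metrics-reader --namespace=my-project
oc create rolebinding metrics-reader-binding \
  --namespace=my-project \
  --clusterrole=view \
  --serviceaccount=my-project:metrics-reader
TOKEN=$(oc create token metrics-reader --namespace=my-project)
# The namespace parameter scopes results to the project the caller can view.
curl -k -f -H "Authorization: Bearer $TOKEN" "https://thanos-querier.openshift-monitoring:9092/api/v1/query?query=up&namespace=my-project"
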
1 change: 0 additions & 1 deletion go.mod
@@ -10,7 +10,6 @@ require (
github.com/go-openapi/strfmt v0.23.0
github.com/google/uuid v1.6.0
github.com/imdario/mergo v0.3.16
github.com/mattn/go-shellwords v1.0.12
github.com/onsi/ginkgo/v2 v2.22.0
github.com/onsi/gomega v1.36.1
github.com/openshift-eng/openshift-tests-extension v0.0.0-20250702172817-97309544869d
2 changes: 0 additions & 2 deletions go.sum
@@ -267,8 +267,6 @@ github.com/mattn/go-colorable v0.1.13 h1:fFA4WZxdEF4tXPZVKMLwD8oUnCTTo08duU7wxec
github.com/mattn/go-colorable v0.1.13/go.mod h1:7S9/ev0klgBDR4GtXTXX8a3vIGJpMovkB8vQcUbaXHg=
github.com/mattn/go-isatty v0.0.20 h1:xfD0iDuEKnDkl03q4limB+vH+GxLEtL/jb4xVJSWWEY=
github.com/mattn/go-isatty v0.0.20/go.mod h1:W+V8PltTTMOvKvAeJH7IuucS94S2C6jfK/D7dTCTo3Y=
github.com/mattn/go-shellwords v1.0.12 h1:M2zGm7EW6UQJvDeQxo4T51eKPurbeFbe8WtebGE2xrk=
github.com/mattn/go-shellwords v1.0.12/go.mod h1:EZzvwXDESEeg03EKmM+RmDnNOPKG4lLtQsUlTZDWQ8Y=
github.com/miekg/dns v1.1.65 h1:0+tIPHzUW0GCge7IiK3guGP57VAw7hoPDfApjkMD1Fc=
github.com/miekg/dns v1.1.65/go.mod h1:Dzw9769uoKVaLuODMDZz9M6ynFU6Em65csPuoi8G0ck=
github.com/mistifyio/go-zfs v2.1.2-0.20190413222219-f784269be439+incompatible h1:aKW/4cBs+yK6gpqU3K/oIwk9Q/XICqd3zOX/UFuvqmk=
5 changes: 1 addition & 4 deletions hack/docgen/managed_resources.go
@@ -248,10 +248,7 @@ func substitutePlaceholdersInDescription(desc, format string) (string, error) {
} else {
content = suite.StringMarkdown()
}
// TODO: remove once unnecessary
if content != "" {
lines = append(lines, content)
}
lines = append(lines, content)
}
if err := scanner.Err(); err != nil {
return "", err
2 changes: 2 additions & 0 deletions jsonnet/components/alertmanager.libsonnet
@@ -71,13 +71,15 @@ function(params)
* Port %d provides access to all the Alertmanager endpoints. %s
%s
* Port %d provides access to the Alertmanager endpoints restricted to a given project. %s
%s
* Port %d provides access to the `/metrics` endpoint only. This port is for internal use, and no other usage is guaranteed.
||| % [
$.service.spec.ports[0].port,
requiredRoles([['monitoring-alertmanager-view', 'for read-only operations'], 'monitoring-alertmanager-edit'], 'openshift-monitoring'),
testFilePlaceholder('openshift-monitoring', 'alertmanager-main', $.service.spec.ports[0].port),
$.service.spec.ports[1].port,
requiredClusterRoles(['monitoring-rules-edit', 'monitoring-edit'], false, ''),
testFilePlaceholder('openshift-monitoring', 'alertmanager-main', $.service.spec.ports[1].port),
$.service.spec.ports[2].port,
],
),
5 changes: 4 additions & 1 deletion jsonnet/components/prometheus.libsonnet
@@ -5,6 +5,7 @@ local generateSecret = import '../utils/generate-secret.libsonnet';
local prometheus = import 'github.com/prometheus-operator/kube-prometheus/jsonnet/kube-prometheus/components/prometheus.libsonnet';
local withDescription = (import '../utils/add-annotations.libsonnet').withDescription;
local requiredClusterRoles = (import '../utils/add-annotations.libsonnet').requiredClusterRoles;
local testFilePlaceholder = (import '../utils/add-annotations.libsonnet').testFilePlaceholder;

function(params)
local cfg = params;
@@ -103,10 +104,12 @@ function(params)
|||
Expose the Prometheus web server within the cluster on the following ports:
* Port %d provides access to all the Prometheus endpoints. %s
%s
* Port %d provides access to the `/metrics` and `/federate` endpoints only. This port is for internal use, and no other usage is guaranteed.
||| % [
$.service.spec.ports[0].port,
requiredClusterRoles(['cluster-monitoring-view'], true),
requiredClusterRoles(['cluster-monitoring-view', 'cluster-monitoring-metrics-api'], false, 'openshift-monitoring'),
testFilePlaceholder('openshift-monitoring', 'prometheus-k8s', $.service.spec.ports[0].port),
$.service.spec.ports[1].port,
],
),
9 changes: 8 additions & 1 deletion jsonnet/components/thanos-querier.libsonnet
@@ -1,6 +1,7 @@
local generateSecret = import '../utils/generate-secret.libsonnet';
local querier = import 'github.com/thanos-io/kube-thanos/jsonnet/kube-thanos/kube-thanos-query.libsonnet';
local withDescription = (import '../utils/add-annotations.libsonnet').withDescription;
local testFilePlaceholder = (import '../utils/add-annotations.libsonnet').testFilePlaceholder;
local requiredRoles = (import '../utils/add-annotations.libsonnet').requiredRoles;
local requiredClusterRoles = (import '../utils/add-annotations.libsonnet').requiredClusterRoles;

@@ -199,16 +200,22 @@ function(params)
|||
Expose the Thanos Querier web server within the cluster on the following ports:
* Port %d provides access to all the Thanos Querier endpoints. %s
%s
* Port %d provides access to the `/api/v1/query`, `/api/v1/query_range/`, `/api/v1/labels`, `/api/v1/label/*/values`, and `/api/v1/series` endpoints restricted to a given project. %s
%s
* Port %d provides access to the `/api/v1/alerts`, and `/api/v1/rules` endpoints restricted to a given project. %s
%s
* Port %d provides access to the `/metrics` endpoint only. This port is for internal use, and no other usage is guaranteed.
||| % [
$.service.spec.ports[0].port,
requiredClusterRoles(['cluster-monitoring-view'], true),
requiredClusterRoles(['cluster-monitoring-view', 'cluster-monitoring-metrics-api'], false, 'openshift-monitoring'),
testFilePlaceholder('openshift-monitoring', 'thanos-querier', $.service.spec.ports[0].port),
$.service.spec.ports[1].port,
requiredClusterRoles(['view'], false, ''),
testFilePlaceholder('openshift-monitoring', 'thanos-querier', $.service.spec.ports[1].port),
$.service.spec.ports[2].port,
requiredClusterRoles(['monitoring-rules-edit', 'monitoring-edit', 'monitoring-rules-view'], false, ''),
testFilePlaceholder('openshift-monitoring', 'thanos-querier', $.service.spec.ports[2].port),
$.service.spec.ports[3].port,
],
),
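The description strings above are jsonnet text blocks formatted with a positional argument array, so each `%s` added to the template must line up with a new entry in the list that follows. A quick illustration with the jsonnet CLI, assuming it is installed (the values here are made up):

jsonnet -e '|||
  * Port %d provides access to all the endpoints. %s
||| % [9091, "Granting access requires binding a user to the `cluster-monitoring-view` cluster role."]'
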
2 changes: 1 addition & 1 deletion jsonnet/utils/add-annotations.libsonnet
@@ -88,7 +88,7 @@
if clusterRoleBinding then
s + '.'
else if namespace != '' then
s + ' in the `%s` project.'
s + ' in the `%s` project.' % namespace
Contributor Author: this is needed for the PR

else
s + ' in the project.',

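The one-line fix above matters because jsonnet's % operator performs the substitution; without it, the literal %s placeholder would be emitted verbatim into the rendered service annotation, which is what this change corrects. A quick check with the jsonnet CLI, assuming it is installed:

jsonnet -e '"in the `%s` project." % "openshift-monitoring"'
# => "in the `openshift-monitoring` project."
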
158 changes: 119 additions & 39 deletions test/e2e/doc_examples_test.go
@@ -15,63 +15,143 @@
package e2e

import (
"context"
"fmt"
"hash/fnv"
"os"
"path/filepath"
"strconv"
"testing"
"time"

"github.com/openshift/cluster-monitoring-operator/test/e2e/framework"
"github.com/openshift/cluster-monitoring-operator/test/e2e/test_command"
"github.com/stretchr/testify/require"
"gopkg.in/yaml.v3"
corev1 "k8s.io/api/core/v1"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

const (
testNamespace = "test-doc-examples-in-cluster"
serviceAccount = "tester"
clusterRoleBinding = "tester"
)

func toPodName(testName string) string {
h := fnv.New64()
h.Write([]byte(testName))
return "test-" + strconv.FormatUint(h.Sum64(), 32)
}

func setupEnv(t *testing.T) {
cleanupNS, err := f.CreateNamespace(testNamespace)
require.NoError(t, err)
t.Cleanup(func() {
require.NoError(t, cleanupNS())
})

cleanupSA, err := f.CreateServiceAccount(testNamespace, serviceAccount)
require.NoError(t, err)
t.Cleanup(func() {
require.NoError(t, cleanupSA())
})

cleanupBinding, err := f.CreateClusterRoleBinding(testNamespace, clusterRoleBinding, "cluster-admin")
require.NoError(t, err)
t.Cleanup(func() {
require.NoError(t, cleanupBinding())
})
}

func TestDocExamples(t *testing.T) {
filesDir := "test_command/scripts/"
tempDir := t.TempDir()
kubeConfigPath := f.KubeConfigPath

entries, err := os.ReadDir(filesDir)
scripts, err := os.ReadDir(filesDir)
require.NoError(t, err)
// In case there is a wiring issue.
require.Greater(t, len(entries), 0)

for _, entry := range entries {
file, err := os.Open(filepath.Join(filesDir, entry.Name()))
require.NoError(t, err)
defer file.Close()

var suite test_command.Suite
decoder := yaml.NewDecoder(file)
decoder.KnownFields(true)
err = decoder.Decode(&suite)
require.NoError(t, err)

for _, test := range suite.Tests {
// TODO: run in //
t.Run(entry.Name(), func(t *testing.T) {
// Set up cleaners
t.Cleanup(func() {
for _, c := range test.TearDown {
c.Run(t, tempDir, kubeConfigPath)
require.Greater(t, len(scripts), 3)
setupEnv(t)

for _, script := range scripts {
t.Run(script.Name(), func(t *testing.T) {
t.Parallel()
file, err := os.Open(filepath.Join(filesDir, script.Name()))
require.NoError(t, err)
defer file.Close()

var suite test_command.Suite
decoder := yaml.NewDecoder(file)
decoder.KnownFields(true)
require.NoError(t, decoder.Decode(&suite))

for i, test := range suite.Tests {
// Run the script inside a Pod as some of the endpoints are not exposed by default.
t.Run(fmt.Sprintf("test-%d", i), func(t *testing.T) {
t.Parallel()
t.Cleanup(func() {
test_command.RunScript(t, test.TearDown, tempDir, kubeConfigPath)
})

ctx := context.Background()
podName := toPodName(t.Name())
containerName := "test"
pod := &corev1.Pod{
ObjectMeta: metav1.ObjectMeta{
Name: podName,
Namespace: testNamespace,
},
Spec: corev1.PodSpec{
ServiceAccountName: serviceAccount,
RestartPolicy: corev1.RestartPolicyNever,
Containers: []corev1.Container{
{
Name: containerName,
Image: "registry.redhat.io/openshift4/ose-cli:latest",
ImagePullPolicy: corev1.PullIfNotPresent,
Command: []string{"bash", "-c", test.Script},
SecurityContext: &corev1.SecurityContext{
Capabilities: &corev1.Capabilities{
Drop: []corev1.Capability{"ALL"},
},
SeccompProfile: &corev1.SeccompProfile{
Type: corev1.SeccompProfileTypeRuntimeDefault,
},
},
},
},
},
}
})

// Setup
envVars := map[string]string{}
for _, setup := range test.SetUp {
require.NoError(t, setup.Run(t, tempDir, kubeConfigPath))
if setup.EnvVarValue() == "" {
continue
pod, err := f.KubeClient.CoreV1().Pods(testNamespace).Create(ctx, pod, metav1.CreateOptions{})
require.NoError(t, err)
t.Cleanup(func() {
err := f.KubeClient.CoreV1().Pods(testNamespace).Delete(context.Background(), podName, metav1.DeleteOptions{})
require.NoError(t, err)
})

err = framework.Poll(time.Second, time.Minute, func() error {
pod, err = f.KubeClient.CoreV1().Pods(testNamespace).Get(ctx, podName, metav1.GetOptions{})
if err != nil {
return err
}
if pod.Status.Phase != corev1.PodSucceeded && pod.Status.Phase != corev1.PodFailed {
return fmt.Errorf("waiting for pod")
}
return nil
})
require.NoError(t, err)

if pod.Status.Phase != corev1.PodSucceeded {
l, err := f.GetLogs(testNamespace, podName, containerName)
require.NoError(t, err)
t.Log(l)
require.Fail(t, "pod failed to execute script")
}
// Check duplicated env vars.
require.NotContains(t, envVars, setup.EnvVar)
envVars[setup.EnvVar] = setup.EnvVarValue()
}

// Run the checks
for _, g := range test.Checks {
require.NoError(t, g.Run(t, tempDir, kubeConfigPath, envVars))
}
})
}
})
}
})
}
}
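
Since each documentation script now executes inside a pod in the test-doc-examples-in-cluster namespace, a failing run can be inspected with standard commands. A sketch (the pod name is a hash derived from the test name by toPodName):

oc get pods --namespace=test-doc-examples-in-cluster
oc logs <pod-name> --namespace=test-doc-examples-in-cluster --container=test
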
@@ -0,0 +1,63 @@
tests:
- script: |
# The following example exercises permissions granted by the monitoring-rules-edit Cluster Role.
# The binding commands are supposed to be run by a user with the necessary privileges.
oc create namespace test-alertmanager-tenancy-monitoring-rules-edit
oc create serviceaccount am-client --namespace=test-alertmanager-tenancy-monitoring-rules-edit
# The binding is done to a Service Account, but it can also be applied to any other user.
oc create rolebinding test-alertmanager-tenancy-monitoring-rules-edit \
--namespace=test-alertmanager-tenancy-monitoring-rules-edit \
--clusterrole=monitoring-rules-edit \
--serviceaccount=test-alertmanager-tenancy-monitoring-rules-edit:am-client
# The token can then be used to access the endpoints on the port.
TOKEN=$(oc create token am-client --namespace=test-alertmanager-tenancy-monitoring-rules-edit)
# Because the port is not exposed by default, the endpoint is assumed to be accessed from within the cluster.
curl -k -f -H "Authorization: Bearer $TOKEN" "https://alertmanager-main.openshift-monitoring:9092/api/v2/alerts?namespace=test-alertmanager-tenancy-monitoring-rules-edit"
curl -k -X POST -f "https://alertmanager-main.openshift-monitoring:9092/api/v2/silences?namespace=test-alertmanager-tenancy-monitoring-rules-edit" \
-H "Authorization: Bearer $TOKEN" -H "Content-Type: application/json" \
-d '{
"matchers": [
{
"name": "alertname",
"value": "MyTestAlert",
"isRegex": false
}
],
"startsAt": "2044-01-01T00:00:00Z",
"endsAt": "2044-01-01T00:00:01Z",
"createdBy": "test-alertmanager-tenancy-monitoring-rules-edit/am-client",
"comment": "Silence test"
}'
tearDown: |
oc delete namespace test-alertmanager-tenancy-monitoring-rules-edit --wait=false
- script: |
# The following example exercises permissions granted by the monitoring-edit Cluster Role.
# The binding commands are supposed to be run by a user with the necessary privileges.
oc create namespace test-alertmanager-tenancy-monitoring-edit
oc create serviceaccount am-client --namespace=test-alertmanager-tenancy-monitoring-edit
# The binding is done to a Service Account, but it can also be applied to any other user.
oc create rolebinding test-alertmanager-tenancy-monitoring-edit \
--namespace=test-alertmanager-tenancy-monitoring-edit \
--clusterrole=monitoring-edit \
--serviceaccount=test-alertmanager-tenancy-monitoring-edit:am-client
# The token can then be used to access the endpoints on the port.
TOKEN=$(oc create token am-client --namespace=test-alertmanager-tenancy-monitoring-edit)
# Because the port is not exposed by default, the endpoint is assumed to be accessed from within the cluster.
curl -k -f -H "Authorization: Bearer $TOKEN" "https://alertmanager-main.openshift-monitoring:9092/api/v2/alerts?namespace=test-alertmanager-tenancy-monitoring-edit"
curl -k -X POST -f "https://alertmanager-main.openshift-monitoring:9092/api/v2/silences?namespace=test-alertmanager-tenancy-monitoring-edit" \
-H "Authorization: Bearer $TOKEN" -H "Content-Type: application/json" \
-d '{
"matchers": [
{
"name": "alertname",
"value": "MyTestAlert",
"isRegex": false
}
],
"startsAt": "2044-01-01T00:00:00Z",
"endsAt": "2044-01-01T00:00:01Z",
"createdBy": "test-alertmanager-tenancy-monitoring-edit/am-client",
"comment": "Silence test"
}'
tearDown: |
oc delete namespace test-alertmanager-tenancy-monitoring-edit --wait=false
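
Before running the curl commands in scripts like these, the binding can be sanity-checked with oc auth can-i. A hedged example against the second script's namespace, assuming monitoring-edit still grants management of PrometheusRule objects:

oc auth can-i create prometheusrules.monitoring.coreos.com \
  --namespace=test-alertmanager-tenancy-monitoring-edit \
  --as=system:serviceaccount:test-alertmanager-tenancy-monitoring-edit:am-client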