
Conversation

@JasonPowr (Member) commented Nov 28, 2025

Summary by Sourcery

Enable FIPS support and validation for the operator and its components, and extend CI/CD configuration to exercise FIPS checks and image mirroring.

New Features:

  • Add FIPS-focused end-to-end test suite to verify all Securesign components and related jobs run in FIPS mode.
  • Introduce an ImageDigestMirrorSet manifest to support mirroring of product images from registry.redhat.io to quay.io.

Enhancements:

  • Mark the operator bundle and ClusterServiceVersion as FIPS-compliant in bundle and CSV metadata.
  • Update the TUF and CTLog test helpers to locate server pods more robustly and to allow longer startup time during E2E verification.

Build:

  • Enable the fips-check parameter in Tekton operator and bundle pipelines for both push and pull request workflows, including retest-all-comment triggers.

Deployment:

  • Add image mirror configuration for core Securesign images via an ImageDigestMirrorSet resource.

Tests:

  • Add comprehensive FIPS E2E tests that install the operator into a FIPS-enabled cluster and assert FIPS mode for all major services, jobs, and the controller manager.

sourcery-ai bot commented Nov 28, 2025

Reviewer's Guide

Enables FIPS support and validation for the rhtas operator by turning on FIPS flags/labels in pipelines and manifests, adding an ImageDigestMirrorSet, tightening some E2E helpers, and introducing a dedicated FIPS E2E test suite that verifies all components run with FIPS enabled.

Sequence diagram for PR validation with FIPS checks and retest-all-comment

```mermaid
sequenceDiagram
  actor Developer
  participant GitHub
  participant PipelinesAsCode
  participant TektonPRPipeline as Tekton_PR_Pipeline
  participant OpenShiftCluster as OpenShift_Cluster
  participant FIPSE2ESuite as FIPS_E2E_Suite

  Developer->>GitHub: Open PR or push commits
  GitHub-->>PipelinesAsCode: PR event (pull_request)
  PipelinesAsCode->>TektonPRPipeline: Start PR pipeline
  TektonPRPipeline->>TektonPRPipeline: Set param manager-pipelinerun-selector
  TektonPRPipeline->>TektonPRPipeline: Set param fips-check=true
  TektonPRPipeline->>OpenShiftCluster: Build and deploy test operator
  TektonPRPipeline->>FIPSE2ESuite: Run tests gated by fips-check
  FIPSE2ESuite->>OpenShiftCluster: Inspect components for FIPS mode
  FIPSE2ESuite-->>TektonPRPipeline: Report FIPS validation result

  Developer->>GitHub: Add comment /retest-all-comment
  GitHub-->>PipelinesAsCode: Comment event (retest-all-comment)
  PipelinesAsCode->>TektonPRPipeline: Rerun PR pipeline
  TektonPRPipeline->>TektonPRPipeline: Match event-type in (pull_request,incoming,retest-all-comment)
  TektonPRPipeline->>FIPSE2ESuite: Re-run FIPS E2E tests
  FIPSE2ESuite-->>Developer: Result via PR status checks
```

File-Level Changes

1. Adjust the TAS TUF and CTLog E2E helpers to be more robust against multiple pods and slow startup (a sketch of the new selection logic follows this entry).
   • Change the TUF GetServerPod helper to return the first pod without a job-name label instead of assuming exactly one pod exists.
   • Increase the CTLog Verify Eventually timeout to 8 minutes to better tolerate slow readiness in CI.
   Files: test/e2e/support/tas/tuf/tuf.go, test/e2e/support/tas/ctlog/ctlog.go
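
A minimal sketch of the new selection logic, assuming the helper's existing `cli`, `labels`, and `constants` wiring (the package paths are not copied from the source tree). Indexing the slice rather than taking the address of the range variable also sidesteps the pointer pitfall flagged in Comment 1 below; the 8-minute CTLog change is a plain `Eventually` timeout bump and needs no sketch.

```go
package tuf

import (
	"context"

	v1 "k8s.io/api/core/v1"
	"sigs.k8s.io/controller-runtime/pkg/client"
	// labels and constants stand in for the repo's own packages
	// (assumed imports, omitted here).
)

// GetServerPod returns the first TUF server pod that was not created by a
// Job; the TUF init job shares the component label, so job-owned pods are
// skipped via their batch.kubernetes.io/job-name label.
func GetServerPod(ctx context.Context, cli client.Client, ns string) *v1.Pod {
	list := &v1.PodList{}
	_ = cli.List(ctx, list, client.InNamespace(ns),
		client.MatchingLabels{labels.LabelAppComponent: constants.ComponentName})
	for i := range list.Items {
		if _, hasLabel := list.Items[i].Labels["batch.kubernetes.io/job-name"]; !hasLabel {
			// Index into the slice so the returned pointer refers to the
			// actual element (see Comment 1 below on &pod semantics).
			return &list.Items[i]
		}
	}
	return nil
}
```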
2. Enable FIPS checks and the retest trigger in the Tekton pipelines for the operator and bundle builds (a hedged YAML sketch follows this entry).
   • Extend the manager-pipelinerun-selector filters to include retest-all-comment as a valid Pipelines-as-Code event type.
   • Add a fips-check pipeline parameter, set to true, for the pull-request and push pipelines of both the operator and the bundle.
   Files: .tekton/rhtas-operator-bundle-pull-request.yaml, .tekton/rhtas-operator-bundle-push.yaml, .tekton/rhtas-operator-pull-request.yaml, .tekton/rhtas-operator-push.yaml
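
A hedged sketch of what the PipelineRun change looks like, assuming the usual Pipelines-as-Code layout: only the fips-check parameter name and the retest-all-comment event type come from this PR; the metadata name and the CEL expression are illustrative, not copied from the repo.

```yaml
apiVersion: tekton.dev/v1
kind: PipelineRun
metadata:
  name: rhtas-operator-pull-request   # illustrative name
  annotations:
    # Extended selector: also fire when someone comments the retest trigger.
    pipelinesascode.tekton.dev/on-cel-expression: >-
      event == "pull_request" || event == "incoming" || event == "retest-all-comment"
spec:
  params:
    # New in this PR: run the FIPS validation during the build.
    - name: fips-check
      value: "true"
```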
3. Declare the operator bundle as FIPS compliant in the shipping manifests (fragments shown below).
   • Flip the features.operators.openshift.io/fips-compliant label from false to true in the bundle Dockerfile.
   • Flip the features.operators.openshift.io/fips-compliant annotation from false to true in the ClusterServiceVersion base manifest.
   Files: bundle.Dockerfile, config/manifests/bases/rhtas-operator.clusterserviceversion.yaml
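
Both flips are one-line metadata changes; the fragments below sketch them. The label key is taken from the change description, while the surrounding lines are illustrative.

```dockerfile
# bundle.Dockerfile (fragment)
LABEL features.operators.openshift.io/fips-compliant="true"
```

```yaml
# config/manifests/bases/rhtas-operator.clusterserviceversion.yaml (fragment)
metadata:
  annotations:
    features.operators.openshift.io/fips-compliant: "true"
```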
4. Introduce a FIPS-focused E2E test suite that installs Securesign and verifies all components and helper jobs run with kernel FIPS mode enabled (the job-check pattern is sketched after this entry).
   • Add a fips_test.go Ginkgo Ordered suite that provisions a Securesign instance, tweaks some spec fields, and then, for each core component (CTLog, Fulcio, Rekor, TUF, TSA, Trillian, operator manager) and selected jobs (createtree, backfill-redis, TUF init), execs into the pods and asserts /proc/sys/crypto/fips_enabled == 1.
   • Create temporary Kubernetes Jobs using the operator's published images (createtree, backfill-redis, TUF) and verify they run with FIPS enabled, cleaning them up via DeferCleanup.
   • Implement logic to discover the operator controller-manager pod by label and service-account suffix before checking its FIPS status.
   Files: test/e2e/fips/fips_test.go
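
A minimal sketch of the job-check pattern described above. The identifiers `createTreeImage`, `cli`, `namespace`, `ctrlclient`, and `k8ssupport.ExecInPodWithOutput` mirror names visible in Comment 2 below or are assumptions standing in for the suite's real wiring; imports are elided.

```go
It("Verify the createtree image runs in FIPS mode", func(ctx SpecContext) {
	// Launch a short-lived Job from the published image; "sleep" keeps the
	// pod alive long enough to exec into it.
	job := &batchv1.Job{
		ObjectMeta: metav1.ObjectMeta{
			Name:      "fips-createtree-check", // hypothetical name
			Namespace: namespace.Name,
		},
		Spec: batchv1.JobSpec{
			Template: v1.PodTemplateSpec{
				Spec: v1.PodSpec{
					RestartPolicy: v1.RestartPolicyNever,
					Containers: []v1.Container{{
						Name:    "check",
						Image:   createTreeImage, // assumed: the operator's published createtree image
						Command: []string{"sleep", "300"},
					}},
				},
			},
		},
	}
	Expect(cli.Create(ctx, job)).To(Succeed())
	DeferCleanup(func(ctx SpecContext) {
		Expect(cli.Delete(ctx, job)).To(Succeed())
	})

	// Wait for the Job's pod to appear and become exec-able, then assert
	// kernel FIPS mode inside it.
	Eventually(func(g Gomega) string {
		pods := &v1.PodList{}
		g.Expect(cli.List(ctx, pods,
			ctrlclient.InNamespace(namespace.Name),
			ctrlclient.MatchingLabels{"batch.kubernetes.io/job-name": job.Name},
		)).To(Succeed())
		g.Expect(pods.Items).To(HaveLen(1))

		out, err := k8ssupport.ExecInPodWithOutput(ctx, pods.Items[0].Name, "check", namespace.Name,
			"cat", "/proc/sys/crypto/fips_enabled")
		g.Expect(err).ToNot(HaveOccurred())
		return strings.TrimSpace(string(out))
	}).WithContext(ctx).Should(Equal("1"))
})
```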
5. Add a dedicated Ginkgo test-suite bootstrap for the FIPS tests with default timeouts.
   • Create suite_test.go for the FIPS test package, wiring up Ginkgo/Gomega, setting the default Eventually timeout to 3 minutes, enforcing context-based timeouts, and enabling full failure-stack output.
   Files: test/e2e/fips/suite_test.go
6. Define an ImageDigestMirrorSet for all Securesign images used by the operator and the tests (a hedged manifest sketch follows).
   • Add .tekton/images-mirror-set.yaml defining an ImageDigestMirrorSet that maps registry.redhat.io/rhtas/* sources to the corresponding quay.io/securesign/* mirrors for all operator, component, and utility images (CT log, client-server, Fulcio, Rekor, Trillian, TSA, TUF, etc.).
   Files: .tekton/images-mirror-set.yaml
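
A hedged sketch of the manifest's shape: ImageDigestMirrorSet and its imageDigestMirrors fields are the standard config.openshift.io/v1 API, but the metadata name and the two repository pairs below are illustrative stand-ins for the PR's full image list.

```yaml
apiVersion: config.openshift.io/v1
kind: ImageDigestMirrorSet
metadata:
  name: securesign-image-mirrors   # name is an assumption
spec:
  imageDigestMirrors:
    # One entry per product image; these repository paths are examples only.
    - source: registry.redhat.io/rhtas/fulcio-rhel9
      mirrors:
        - quay.io/securesign/fulcio-server
    - source: registry.redhat.io/rhtas/rekor-server-rhel9
      mirrors:
        - quay.io/securesign/rekor-server
```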


sourcery-ai bot left a comment

Hey there - I've reviewed your changes - here's some feedback:

  • In tuf.GetServerPod, consider avoiding the hardcoded "batch.kubernetes.io/job-name" label string (e.g. by using a shared constant) and making the selection deterministic when multiple non-job pods exist (for example by sorting or explicitly preferring Running pods) rather than returning the first range result.
  • The FIPS E2E test in test/e2e/fips/fips_test.go repeats very similar logic for creating a job, waiting for a single pod to appear, and checking /proc/sys/crypto/fips_enabled; extracting a helper function for this pattern would reduce boilerplate and make the tests easier to maintain.
Individual Comments

Comment 1
Location: test/e2e/support/tas/tuf/tuf.go:60-65

Code context (diff):

```go
 	_ = cli.List(ctx, list, client.InNamespace(ns), client.MatchingLabels{labels.LabelAppComponent: constants.ComponentName})
-	if len(list.Items) != 1 {
-		return nil
+	for _, pod := range list.Items {
+		if _, hasLabel := pod.Labels["batch.kubernetes.io/job-name"]; !hasLabel {
+			return &pod
+		}
 	}
-	return &list.Items[0]
+	return nil
 }

```

Issue (bug_risk): Using `&pod` inside the `for range` loop returns a pointer to the loop variable, not the slice element, so on Go versions before 1.22 the helper can return the wrong pod when more than one item exists.

In those Go versions the range variable is reused across iterations, so taking `&pod` always yields the address of the same variable and callers can see the wrong pod (often the last one), leading to incorrect or flaky E2E test behavior. Go 1.22 made loop variables per-iteration, but indexing the slice avoids the pitfall on every toolchain.

Instead, take the address of the slice element directly, for example:

```go
for i := range list.Items {
    pod := &list.Items[i]
    if _, hasLabel := pod.Labels["batch.kubernetes.io/job-name"]; !hasLabel {
        return pod
    }
}
```

or:

```go
for i := range list.Items {
    if _, hasLabel := list.Items[i].Labels["batch.kubernetes.io/job-name"]; !hasLabel {
        return &list.Items[i]
    }
}
```

Comment 2
Location: test/e2e/fips/fips_test.go:200-79

Code context (diff):

```go
+			}).WithContext(ctx).Should(Equal("1"))
+		})
+
+		It("Verify rekor-redis is running in FIPS mode", func(ctx SpecContext) {
+			list := &v1.PodList{}
+			Expect(cli.List(ctx, list,
+				ctrlclient.InNamespace(namespace.Name),
+				ctrlclient.MatchingLabels{labels.LabelAppComponent: rekoractions.RedisDeploymentName},
+			)).To(Succeed())
+			Expect(list.Items).To(HaveLen(1))
+			redis := &list.Items[0]
+
+			out, err := k8ssupport.ExecInPodWithOutput(ctx, redis.Name, rekoractions.RedisDeploymentName, namespace.Name,
+				"cat", "/proc/sys/crypto/fips_enabled",
+			)
+			Expect(err).ToNot(HaveOccurred())
+			Expect(strings.TrimSpace(string(out))).To(Equal("1"))
+		})
+
```

Suggestion (testing): Directly listing pods once and immediately exec-ing into them can cause flakes if the pod is not yet running; consider wrapping pod readiness and exec in `Eventually`, as done in other tests.

Here you (and in the other FIPS checks like logserver, logsigner, db, rekor-ui) list pods once, assert `HaveLen(1)`, then immediately exec. On slower or busy clusters, the pod may exist but not yet be runnable, causing `ExecInPodWithOutput` to be flaky.

To harden this and align with the job-based tests above, consider wrapping the list + exec in an `Eventually`, e.g.:

```go
list := &v1.PodList{}
Eventually(func(g Gomega) string {
    g.Expect(cli.List(ctx, list,
        ctrlclient.InNamespace(namespace.Name),
        ctrlclient.MatchingLabels{labels.LabelAppComponent: rekoractions.RedisDeploymentName},
    )).To(Succeed())
    g.Expect(list.Items).To(HaveLen(1))

    redis := &list.Items[0]
    out, err := k8ssupport.ExecInPodWithOutput(ctx, redis.Name, rekoractions.RedisDeploymentName, namespace.Name,
        "cat", "/proc/sys/crypto/fips_enabled",
    )
    g.Expect(err).ToNot(HaveOccurred())
    return strings.TrimSpace(string(out))
}).WithContext(ctx).Should(Equal("1"))
```

This waits for both presence and readiness before asserting the FIPS flag, reducing timing-related flakes.

Suggested implementation:

```golang
		It("Verify rekor-redis is running in FIPS mode", func(ctx SpecContext) {
			list := &v1.PodList{}

			Eventually(func(g Gomega) string {
				g.Expect(cli.List(ctx, list,
					ctrlclient.InNamespace(namespace.Name),
					ctrlclient.MatchingLabels{labels.LabelAppComponent: rekoractions.RedisDeploymentName},
				)).To(Succeed())
				g.Expect(list.Items).To(HaveLen(1))

				redis := &list.Items[0]

				out, err := k8ssupport.ExecInPodWithOutput(ctx, redis.Name, rekoractions.RedisDeploymentName, namespace.Name,
					"cat", "/proc/sys/crypto/fips_enabled",
				)
				g.Expect(err).ToNot(HaveOccurred())
				return strings.TrimSpace(string(out))
			}).WithContext(ctx).Should(Equal("1"))
		})

```

To fully align with your review comment and harden all FIPS pod checks against timing flakes, you should make analogous changes for the other FIPS tests that:
1. List pods once, assert `HaveLen(1)`, and immediately call `ExecInPodWithOutput` (e.g., logserver, logsigner, db, rekor-ui).
2. Wrap each such sequence in an `Eventually(func(g Gomega) string { ... }).WithContext(ctx).Should(Equal("1"))`, using the appropriate label selectors and container names for each component.
The pattern should mirror the edited rekor-redis test above and the existing job-based tests already using `Eventually` in this file.
</issue_to_address>

Comment 3
Location: test/e2e/fips/suite_test.go:20-23

Code context (diff):

```go
+	log.SetLogger(GinkgoLogr)
+	SetDefaultEventuallyTimeout(time.Duration(3) * time.Minute)
+	EnforceDefaultTimeoutsWhenUsingContexts()
+	RunSpecs(t, "Fips Install")
+
+	// print whole stack in case of failure
+	format.MaxLength = 0
+}
```

Nitpick (bug_risk): Setting `format.MaxLength = 0` after `RunSpecs` won't affect failures that already occurred; consider moving it before `RunSpecs` so it takes effect during the suite.

Because `format.MaxLength = 0` runs only after `RunSpecs` completes, it doesn’t influence any failures in this suite. If you want full diffs/stack traces for failing `Expect` calls, set it before `RunSpecs`:

```go
func TestFipsInstall(t *testing.T) {
    format.MaxLength = 0

    RegisterFailHandler(Fail)
    log.SetLogger(GinkgoLogr)
    SetDefaultEventuallyTimeout(3 * time.Minute)
    EnforceDefaultTimeoutsWhenUsingContexts()
    RunSpecs(t, "Fips Install")
}
```

This ensures Gomega uses the desired formatting for all expectations in the FIPS E2E tests.


@JasonPowr JasonPowr requested review from bouskaJ and osmman December 2, 2025 08:41
