Enable fips checks and add basic E2E #1479
base: main
Conversation
Reviewer's Guide

Enables FIPS support and validation for the rhtas operator by turning on FIPS flags/labels in pipelines and manifests, adding an ImageDigestMirrorSet, tightening some E2E helpers, and introducing a dedicated FIPS E2E test suite that verifies all components run with FIPS enabled.

Sequence diagram for PR validation with FIPS checks and retest-all-comment:

```mermaid
sequenceDiagram
    actor Developer
    participant GitHub
    participant PipelinesAsCode
    participant TektonPRPipeline as Tekton_PR_Pipeline
    participant OpenShiftCluster as OpenShift_Cluster
    participant FIPSE2ESuite as FIPS_E2E_Suite
    Developer->>GitHub: Open PR or push commits
    GitHub-->>PipelinesAsCode: PR event (pull_request)
    PipelinesAsCode->>TektonPRPipeline: Start PR pipeline
    TektonPRPipeline->>TektonPRPipeline: Set param manager-pipelinerun-selector
    TektonPRPipeline->>TektonPRPipeline: Set param fips-check=true
    TektonPRPipeline->>OpenShiftCluster: Build and deploy test operator
    TektonPRPipeline->>FIPSE2ESuite: Run tests gated by fips-check
    FIPSE2ESuite->>OpenShiftCluster: Inspect components for FIPS mode
    FIPSE2ESuite-->>TektonPRPipeline: Report FIPS validation result
    Developer->>GitHub: Add comment /retest-all-comment
    GitHub-->>PipelinesAsCode: Comment event (retest-all-comment)
    PipelinesAsCode->>TektonPRPipeline: Rerun PR pipeline
    TektonPRPipeline->>TektonPRPipeline: Match event-type in (pull_request,incoming,retest-all-comment)
    TektonPRPipeline->>FIPSE2ESuite: Re-run FIPS E2E tests
    FIPSE2ESuite-->>Developer: Result via PR status checks
```
File-Level Changes
Hey there - I've reviewed your changes - here's some feedback:
- In `tuf.GetServerPod`, consider avoiding the hardcoded `"batch.kubernetes.io/job-name"` label string (e.g. by using a shared constant) and making the selection deterministic when multiple non-job pods exist (for example by sorting or explicitly preferring `Running` pods) rather than returning the first range result.
- The FIPS E2E test in `test/e2e/fips/fips_test.go` repeats very similar logic for creating a job, waiting for a single pod to appear, and checking `/proc/sys/crypto/fips_enabled`; extracting a helper function for this pattern would reduce boilerplate and make the tests easier to maintain.
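The shared-constant suggestion can be sketched as below. This is a stdlib-only illustration (the constant and helper names are hypothetical); in the real code one could instead reuse `batchv1.JobNameLabel` from `k8s.io/api/batch/v1` if the dependency's version exposes it:

```go
package main

import "fmt"

// jobNameLabel mirrors the well-known Kubernetes label set on pods
// created by a Job controller. Defining it once (or importing the
// upstream constant) avoids scattering the raw string through helpers.
const jobNameLabel = "batch.kubernetes.io/job-name"

// isJobPod reports whether a pod's labels mark it as job-owned.
func isJobPod(labels map[string]string) bool {
	_, ok := labels[jobNameLabel]
	return ok
}

func main() {
	fmt.Println(isJobPod(map[string]string{jobNameLabel: "tuf-init"})) // true
	fmt.Println(isJobPod(map[string]string{"app": "tuf"}))            // false
}
```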
Prompt for AI Agents
Please address the comments from this code review:
## Overall Comments
- In `tuf.GetServerPod`, consider avoiding the hardcoded `"batch.kubernetes.io/job-name"` label string (e.g. by using a shared constant) and making the selection deterministic when multiple non-job pods exist (for example by sorting or explicitly preferring `Running` pods) rather than returning the first range result.
- The FIPS E2E test in `test/e2e/fips/fips_test.go` repeats very similar logic for creating a job, waiting for a single pod to appear, and checking `/proc/sys/crypto/fips_enabled`; extracting a helper function for this pattern would reduce boilerplate and make the tests easier to maintain.
## Individual Comments
### Comment 1
<location> `test/e2e/support/tas/tuf/tuf.go:60-65` </location>
<code_context>
_ = cli.List(ctx, list, client.InNamespace(ns), client.MatchingLabels{labels.LabelAppComponent: constants.ComponentName})
- if len(list.Items) != 1 {
- return nil
+ for _, pod := range list.Items {
+ if _, hasLabel := pod.Labels["batch.kubernetes.io/job-name"]; !hasLabel {
+ return &pod
+ }
}
- return &list.Items[0]
+ return nil
}
</code_context>
<issue_to_address>
**issue (bug_risk):** Using `&pod` inside the `for range` loop returns a pointer to the loop variable, not the actual slice element, which will make the helper return the wrong pod when there is more than one item.
Because the range variable is reused, taking `&pod` will always return the address of the same variable, so callers can see the wrong pod (often the last one), leading to incorrect or flaky E2E test behavior.
Instead, take the address of the slice element directly, for example:
```go
for i := range list.Items {
pod := &list.Items[i]
if _, hasLabel := pod.Labels["batch.kubernetes.io/job-name"]; !hasLabel {
return pod
}
}
```
or:
```go
for i := range list.Items {
if _, hasLabel := list.Items[i].Labels["batch.kubernetes.io/job-name"]; !hasLabel {
return &list.Items[i]
}
}
```
</issue_to_address>
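The aliasing hazard behind this comment can be shown with a self-contained demo: a pointer taken from the range variable points at a copy of the element, so writes through it never reach the slice, whereas indexing the slice yields a pointer to the element itself (names here are illustrative):

```go
package main

import "fmt"

type Pod struct{ Name string }

// mutateViaRangeVar writes through &pod, the address of the loop
// variable, which is only a copy of the slice element.
func mutateViaRangeVar(items []Pod) {
	for _, pod := range items {
		p := &pod
		p.Name = "changed"
	}
}

// mutateViaIndex writes through &items[i], the slice element itself.
func mutateViaIndex(items []Pod) {
	for i := range items {
		p := &items[i]
		p.Name = "changed"
	}
}

func main() {
	a := []Pod{{"a"}}
	mutateViaRangeVar(a)
	fmt.Println(a[0].Name) // still "a": only the copy was mutated

	b := []Pod{{"b"}}
	mutateViaIndex(b)
	fmt.Println(b[0].Name) // "changed": the element was mutated
}
```

Note that Go 1.22 changed `for range` to give each iteration a fresh loop variable, which removes the "every pointer aliases the same variable" failure mode; the pointer-to-a-copy behavior shown above is unchanged, so `&list.Items[i]` remains the correct form when callers need the actual element.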
### Comment 2
<location> `test/e2e/fips/fips_test.go:200-79` </location>
<code_context>
+ }).WithContext(ctx).Should(Equal("1"))
+ })
+
+ It("Verify rekor-redis is running in FIPS mode", func(ctx SpecContext) {
+ list := &v1.PodList{}
+ Expect(cli.List(ctx, list,
+ ctrlclient.InNamespace(namespace.Name),
+ ctrlclient.MatchingLabels{labels.LabelAppComponent: rekoractions.RedisDeploymentName},
+ )).To(Succeed())
+ Expect(list.Items).To(HaveLen(1))
+ redis := &list.Items[0]
+
+ out, err := k8ssupport.ExecInPodWithOutput(ctx, redis.Name, rekoractions.RedisDeploymentName, namespace.Name,
+ "cat", "/proc/sys/crypto/fips_enabled",
+ )
+ Expect(err).ToNot(HaveOccurred())
+ Expect(strings.TrimSpace(string(out))).To(Equal("1"))
+ })
+
</code_context>
<issue_to_address>
**suggestion (testing):** Directly listing pods once and immediately exec-ing into them can cause flakes if the pod is not yet running; consider wrapping pod readiness and exec in `Eventually` as done in other tests.
Here you (and in the other FIPS checks like logserver, logsigner, db, rekor-ui) list pods once, assert `HaveLen(1)`, then immediately exec. On slower or busy clusters, the pod may exist but not yet be runnable, causing `ExecInPodWithOutput` to be flaky.
To harden this and align with the job-based tests above, consider wrapping the list + exec in an `Eventually`, e.g.:
```go
list := &v1.PodList{}
Eventually(func(g Gomega) string {
g.Expect(cli.List(ctx, list,
ctrlclient.InNamespace(namespace.Name),
ctrlclient.MatchingLabels{labels.LabelAppComponent: rekoractions.RedisDeploymentName},
)).To(Succeed())
g.Expect(list.Items).To(HaveLen(1))
redis := &list.Items[0]
out, err := k8ssupport.ExecInPodWithOutput(ctx, redis.Name, rekoractions.RedisDeploymentName, namespace.Name,
"cat", "/proc/sys/crypto/fips_enabled",
)
g.Expect(err).ToNot(HaveOccurred())
return strings.TrimSpace(string(out))
}).WithContext(ctx).Should(Equal("1"))
```
This waits for both presence and readiness before asserting the FIPS flag, reducing timing-related flakes.
Suggested implementation:
```golang
It("Verify rekor-redis is running in FIPS mode", func(ctx SpecContext) {
list := &v1.PodList{}
Eventually(func(g Gomega) string {
g.Expect(cli.List(ctx, list,
ctrlclient.InNamespace(namespace.Name),
ctrlclient.MatchingLabels{labels.LabelAppComponent: rekoractions.RedisDeploymentName},
)).To(Succeed())
g.Expect(list.Items).To(HaveLen(1))
redis := &list.Items[0]
out, err := k8ssupport.ExecInPodWithOutput(ctx, redis.Name, rekoractions.RedisDeploymentName, namespace.Name,
"cat", "/proc/sys/crypto/fips_enabled",
)
g.Expect(err).ToNot(HaveOccurred())
return strings.TrimSpace(string(out))
}).WithContext(ctx).Should(Equal("1"))
})
```
To fully align with your review comment and harden all FIPS pod checks against timing flakes, you should make analogous changes for the other FIPS tests that:
1. List pods once, assert `HaveLen(1)`, and immediately call `ExecInPodWithOutput` (e.g., logserver, logsigner, db, rekor-ui).
2. Wrap each such sequence in an `Eventually(func(g Gomega) string { ... }).WithContext(ctx).Should(Equal("1"))`, using the appropriate label selectors and container names for each component.
The pattern should mirror the edited rekor-redis test above and the existing job-based tests already using `Eventually` in this file.
</issue_to_address>
### Comment 3
<location> `test/e2e/fips/suite_test.go:20-23` </location>
<code_context>
+ log.SetLogger(GinkgoLogr)
+ SetDefaultEventuallyTimeout(time.Duration(3) * time.Minute)
+ EnforceDefaultTimeoutsWhenUsingContexts()
+ RunSpecs(t, "Fips Install")
+
+ // print whole stack in case of failure
+ format.MaxLength = 0
+}
</code_context>
<issue_to_address>
**nitpick (bug_risk):** Setting `format.MaxLength = 0` after `RunSpecs` won’t affect failures that already occurred; consider moving it before `RunSpecs` so it takes effect during the suite.
Because `format.MaxLength = 0` runs only after `RunSpecs` completes, it doesn’t influence any failures in this suite. If you want full diffs/stack traces for failing `Expect` calls, set it before `RunSpecs`:
```go
func TestFipsInstall(t *testing.T) {
format.MaxLength = 0
RegisterFailHandler(Fail)
log.SetLogger(GinkgoLogr)
SetDefaultEventuallyTimeout(3 * time.Minute)
EnforceDefaultTimeoutsWhenUsingContexts()
RunSpecs(t, "Fips Install")
}
```
This ensures Gomega uses the desired formatting for all expectations in the FIPS E2E tests.
</issue_to_address>
Summary by Sourcery
Enable FIPS support and validation for the operator and its components, and extend CI/CD configuration to exercise FIPS checks and image mirroring.
New Features:
Enhancements:
Build:
Deployment:
Tests: