90 changes: 90 additions & 0 deletions modules/manage/examples/kubernetes/console-upgrades.feature
@@ -0,0 +1,90 @@
@operator:none
Feature: Upgrading the operator with Console installed
  @skip:gke @skip:aks @skip:eks
  Scenario: Console v2 to v3 no warnings
    Given I helm install "redpanda-operator" "redpanda/operator" --version v25.1.3 with values:
      """
      """
    And I apply Kubernetes manifest:
      """
      ---
      apiVersion: cluster.redpanda.com/v1alpha2
      kind: Redpanda
      metadata:
        name: operator-console-upgrade
      spec:
        clusterSpec:
          console:
            # Old versions render the console stanza of the chart incorrectly
            # unless nameOverride is set, because the configmap values for both
            # the console deployment and the redpanda statefulset map to the
            # same name. Setting nameOverride to "broken" works around this.
            nameOverride: broken
          tls:
            enabled: false
          external:
            enabled: false
          statefulset:
            replicas: 1
            sideCars:
              image:
                tag: dev
                repository: localhost/redpanda-operator
      """
    And cluster "operator-console-upgrade" is available
    Then I can helm upgrade "redpanda-operator" "../operator/chart" with values:
      """
      image:
        tag: dev
        repository: localhost/redpanda-operator
      crds:
        experimental: true
      """
    And cluster "operator-console-upgrade" should be stable with 1 nodes
    And the migrated console cluster "operator-console-upgrade" should have 0 warnings

  @skip:gke @skip:aks @skip:eks
  Scenario: Console v2 to v3 with warnings
    Given I helm install "redpanda-operator" "redpanda/operator" --version v25.1.3 with values:
      """
      """
    And I apply Kubernetes manifest:
      """
      ---
      apiVersion: cluster.redpanda.com/v1alpha2
      kind: Redpanda
      metadata:
        name: operator-console-upgrade-warnings
      spec:
        clusterSpec:
          console:
            nameOverride: broken
            console:
              roleBindings:
                - roleName: admin
                  subjects:
                    - kind: group
                      provider: OIDC
                      name: devs
          tls:
            enabled: false
          external:
            enabled: false
          statefulset:
            replicas: 1
            sideCars:
              image:
                tag: dev
                repository: localhost/redpanda-operator
      """
    And cluster "operator-console-upgrade-warnings" is available
    Then I can helm upgrade "redpanda-operator" "../operator/chart" with values:
      """
      image:
        tag: dev
        repository: localhost/redpanda-operator
      crds:
        experimental: true
      """
    And cluster "operator-console-upgrade-warnings" should be stable with 1 nodes
    And the migrated console cluster "operator-console-upgrade-warnings" should have 1 warning
180 changes: 178 additions & 2 deletions modules/manage/examples/kubernetes/role-crds.feature
@@ -83,6 +83,7 @@ Feature: Role CRDs
      # In this example manifest, a role CRD called "travis-role" manages ACLs for an existing role.
      # The role includes authorization rules that allow reading from topics with names starting with "some-topic".
      # This example assumes that you already have a role called "travis-role" in your cluster.
      # Note: When the CRD is deleted, the operator will remove both the role and ACLs since it takes full ownership.
      ---
      apiVersion: cluster.redpanda.com/v1alpha2
      kind: RedpandaRole
@@ -106,5 +107,180 @@ Feature: Role CRDs
      """
    And role "travis-role" is successfully synced
    And I delete the CRD role "travis-role"
    Then there should still be role "travis-role" in cluster "sasl"
    And there should be no ACLs for role "travis-role" in cluster "sasl"
    Then there should be no role "travis-role" in cluster "sasl"

  @skip:gke @skip:aks @skip:eks
  Scenario: Add managed principals to the role
    Given there is no role "team-role" in cluster "sasl"
    And there are the following pre-existing users in cluster "sasl"
      | name  | password | mechanism     |
      | user1 | password | SCRAM-SHA-256 |
      | user2 | password | SCRAM-SHA-256 |
      | user3 | password | SCRAM-SHA-256 |
    When I apply Kubernetes manifest:
      """
      apiVersion: cluster.redpanda.com/v1alpha2
      kind: RedpandaRole
      metadata:
        name: team-role
      spec:
        cluster:
          clusterRef:
            name: sasl
        principals:
          - User:user1
          - User:user2
      """
    And role "team-role" is successfully synced
    Then role "team-role" should exist in cluster "sasl"
    And role "team-role" should have members "user1 and user2" in cluster "sasl"
    And RedpandaRole "team-role" should have status field "managedPrincipals" set to "true"
    When I apply Kubernetes manifest:
      """
      apiVersion: cluster.redpanda.com/v1alpha2
      kind: RedpandaRole
      metadata:
        name: team-role
      spec:
        cluster:
          clusterRef:
            name: sasl
        principals:
          - User:user1
          - User:user2
          - User:user3
      """
    And role "team-role" is successfully synced
    Then role "team-role" should have members "user1 and user2 and user3" in cluster "sasl"
    And RedpandaRole "team-role" should have status field "managedPrincipals" set to "true"

  @skip:gke @skip:aks @skip:eks
  Scenario: Remove managed principals from the role
    Given there is no role "shrinking-role" in cluster "sasl"
    And there are the following pre-existing users in cluster "sasl"
      | name | password | mechanism     |
      | dev1 | password | SCRAM-SHA-256 |
      | dev2 | password | SCRAM-SHA-256 |
      | dev3 | password | SCRAM-SHA-256 |
    When I apply Kubernetes manifest:
      """
      apiVersion: cluster.redpanda.com/v1alpha2
      kind: RedpandaRole
      metadata:
        name: shrinking-role
      spec:
        cluster:
          clusterRef:
            name: sasl
        principals:
          - User:dev1
          - User:dev2
          - User:dev3
      """
    And role "shrinking-role" is successfully synced
    Then role "shrinking-role" should have members "dev1 and dev2 and dev3" in cluster "sasl"
    And RedpandaRole "shrinking-role" should have status field "managedPrincipals" set to "true"
    When I apply Kubernetes manifest:
      """
      apiVersion: cluster.redpanda.com/v1alpha2
      kind: RedpandaRole
      metadata:
        name: shrinking-role
      spec:
        cluster:
          clusterRef:
            name: sasl
        principals:
          - User:dev1
      """
    And role "shrinking-role" is successfully synced
    Then role "shrinking-role" should have members "dev1" in cluster "sasl"
    And role "shrinking-role" should not have member "dev2" in cluster "sasl"
    And role "shrinking-role" should not have member "dev3" in cluster "sasl"
    And RedpandaRole "shrinking-role" should have status field "managedPrincipals" set to "true"

  @skip:gke @skip:aks @skip:eks
  Scenario: Stop managing principals
    Given there is no role "clearing-role" in cluster "sasl"
    And there are the following pre-existing users in cluster "sasl"
      | name  | password | mechanism     |
      | temp1 | password | SCRAM-SHA-256 |
      | temp2 | password | SCRAM-SHA-256 |
    When I apply Kubernetes manifest:
      """
      apiVersion: cluster.redpanda.com/v1alpha2
      kind: RedpandaRole
      metadata:
        name: clearing-role
      spec:
        cluster:
          clusterRef:
            name: sasl
        principals:
          - User:temp1
          - User:temp2
      """
    And role "clearing-role" is successfully synced
    Then role "clearing-role" should have members "temp1 and temp2" in cluster "sasl"
    And RedpandaRole "clearing-role" should have status field "managedPrincipals" set to "true"
    When I apply Kubernetes manifest:
      """
      apiVersion: cluster.redpanda.com/v1alpha2
      kind: RedpandaRole
      metadata:
        name: clearing-role
      spec:
        cluster:
          clusterRef:
            name: sasl
        principals: []
      """
    And role "clearing-role" is successfully synced
    Then RedpandaRole "clearing-role" should have no members in cluster "sasl"
    And RedpandaRole "clearing-role" should have status field "managedPrincipals" set to "false"

  @skip:gke @skip:aks @skip:eks
  Scenario: Replace all managed principals
    Given there is no role "swap-role" in cluster "sasl"
    And there are the following pre-existing users in cluster "sasl"
      | name     | password | mechanism     |
      | olduser1 | password | SCRAM-SHA-256 |
      | olduser2 | password | SCRAM-SHA-256 |
      | newuser1 | password | SCRAM-SHA-256 |
      | newuser2 | password | SCRAM-SHA-256 |
    When I apply Kubernetes manifest:
      """
      apiVersion: cluster.redpanda.com/v1alpha2
      kind: RedpandaRole
      metadata:
        name: swap-role
      spec:
        cluster:
          clusterRef:
            name: sasl
        principals:
          - User:olduser1
          - User:olduser2
      """
    And role "swap-role" is successfully synced
    Then role "swap-role" should have members "olduser1 and olduser2" in cluster "sasl"
    And RedpandaRole "swap-role" should have status field "managedPrincipals" set to "true"
    When I apply Kubernetes manifest:
      """
      apiVersion: cluster.redpanda.com/v1alpha2
      kind: RedpandaRole
      metadata:
        name: swap-role
      spec:
        cluster:
          clusterRef:
            name: sasl
        principals:
          - User:newuser1
          - User:newuser2
      """
    And role "swap-role" is successfully synced
    Then role "swap-role" should have members "newuser1 and newuser2" in cluster "sasl"
    And role "swap-role" should not have member "olduser1" in cluster "sasl"
    And role "swap-role" should not have member "olduser2" in cluster "sasl"
    And RedpandaRole "swap-role" should have status field "managedPrincipals" set to "true"
47 changes: 47 additions & 0 deletions modules/manage/examples/kubernetes/upgrade.feature
@@ -0,0 +1,47 @@
Feature: Upgrading redpanda
  @skip:gke @skip:aks @skip:eks @skip:k3d
  Scenario: Redpanda upgrade from 25.2.11
Comment on lines +2 to +3
⚠️ Potential issue | 🟡 Minor

🧩 Analysis chain

Verify the skip tags are appropriate for a documentation example.

The scenario skips all major cloud-managed Kubernetes environments (GKE, AKS, EKS) and K3D. If this test is intended as a documentation example for users, they may attempt to follow it on these platforms. Ensure these skip tags align with actual platform limitations or move this to a non-example location if it's only for internal testing.

Run the following script to check if there are other upgrade examples without these restrictions:


🏁 Script executed:

#!/bin/bash
# Description: Find other upgrade feature files to compare skip tag usage

# Search for upgrade-related feature files
fd -e feature . | xargs rg -l "upgrade" -i

# Check skip tag patterns in feature files
rg "@skip:" -A 2 -B 1 --type gherkin modules/manage/examples/

Length of output: 194


🏁 Script executed:

#!/bin/bash
# Description: Extract and compare skip tags across upgrade feature files

echo "=== upgrade.feature skip tags ==="
rg "@skip:" modules/manage/examples/kubernetes/upgrade.feature -A 1 -B 1 || echo "No skip tags found"

echo ""
echo "=== operator-upgrades.feature skip tags ==="
rg "@skip:" modules/manage/examples/kubernetes/operator-upgrades.feature -A 1 -B 1 || echo "No skip tags found"

echo ""
echo "=== cluster.feature skip tags ==="
rg "@skip:" modules/manage/examples/kubernetes/cluster.feature -A 1 -B 1 || echo "No skip tags found"

echo ""
echo "=== All @skip patterns across feature files ==="
rg "@skip:" modules/manage/examples/kubernetes/ -c

Length of output: 1208


🏁 Script executed:

#!/bin/bash
# Description: Examine the full content of upgrade.feature to understand k3d skip rationale

echo "=== Full upgrade.feature content ==="
cat modules/manage/examples/kubernetes/upgrade.feature

echo ""
echo "=== Check for k3d mentions in any feature files ==="
rg "k3d" modules/manage/examples/kubernetes/ -i -B 2 -A 2

Length of output: 1661


Remove or document the @skip:k3d tag; it is inconsistent with similar feature files.

The gke/aks/eks skip tags are consistently applied across all example feature files, but upgrade.feature is the only one that additionally skips k3d. No documentation explains this k3d-specific limitation. Either remove this skip tag to align with operator-upgrades.feature and cluster.feature, or add a comment explaining why upgrades specifically require k3d to be skipped.
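
If the tag is kept, a minimal sketch of the second option is an inline comment above the tag line. The reason text below is a placeholder for the authors to fill in, not a confirmed k3d limitation:

Feature: Upgrading redpanda
  # k3d is skipped here because <reason / tracking issue ID>.
  @skip:gke @skip:aks @skip:eks @skip:k3d
  Scenario: Redpanda upgrade from 25.2.11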

🤖 Prompt for AI Agents
In modules/manage/examples/kubernetes/upgrade.feature around lines 2 to 3, the
file includes an extra @skip:k3d tag that is inconsistent with other example
feature files; either remove the @skip:k3d tag to match
operator-upgrades.feature and cluster.feature, or add a short comment on the
same lines explaining why k3d must be skipped for upgrades (e.g., specific k3d
limitation or known issue ID), so update the file by deleting the @skip:k3d
token if there is no k3d-specific reason, or leaving it and adding a one-line
comment documenting the reason.

    Given I apply Kubernetes manifest:
      """
      ---
      apiVersion: cluster.redpanda.com/v1alpha2
      kind: Redpanda
      metadata:
        name: cluster-upgrade
      spec:
        clusterSpec:
          image:
            repository: redpandadata/redpanda
            tag: v25.2.11
          console:
            enabled: false
          statefulset:
            replicas: 3
            sideCars:
              image:
                tag: dev
                repository: localhost/redpanda-operator
      """
    And cluster "cluster-upgrade" should be stable with 3 nodes
    Then I apply Kubernetes manifest:
      """
      ---
      apiVersion: cluster.redpanda.com/v1alpha2
      kind: Redpanda
      metadata:
        name: cluster-upgrade
      spec:
        clusterSpec:
          image:
            repository: redpandadata/redpanda-unstable
            tag: v25.3.1-rc4
          console:
            enabled: false
          statefulset:
            replicas: 3
            sideCars:
              image:
                tag: dev
                repository: localhost/redpanda-operator
      """
    And cluster "cluster-upgrade" should be stable with 3 nodes
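
Outside the test harness, a minimal sketch of a post-upgrade check follows. It assumes the operator names the StatefulSet and its pods after the Redpanda resource (cluster-upgrade, cluster-upgrade-0, ...) and that the pods carry the chart's standard app.kubernetes.io/instance label; adjust these names for your deployment.

# Wait for the rolling restart triggered by the image change to finish
# (assumes the StatefulSet is named after the Redpanda resource).
kubectl rollout status statefulset/cluster-upgrade --timeout=15m

# List the images each broker pod is running; the upgraded tag should appear
# (assumes the chart's standard app.kubernetes.io/instance label).
kubectl get pods -l app.kubernetes.io/instance=cluster-upgrade \
  -o jsonpath='{range .items[*]}{.metadata.name}{": "}{.spec.containers[*].image}{"\n"}{end}'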