Shared Volumes #919

Open · wants to merge 9 commits into master
4 changes: 2 additions & 2 deletions infrastructure/certificate-provisioning/README.md
@@ -111,14 +111,14 @@ By default, cert-manager has been configured to prove the control of the domain
labels:
use-dns01-solver: "true"
```

## Synchronize digital certificates between namespaces
❗❗ `Kubed is no longer available and has been superseded by ConfigSyncer`

In some scenarios, different `Ingress` resources in different namespaces may refer to the same domain (with different paths). Unfortunately, annotating all these Ingresses with the `cert-manager.io/cluster-issuer` annotation quickly leads to hitting the Let's Encrypt rate limits. Hence, it is necessary to introduce a mechanism to synchronize the generated secret between multiple namespaces. One of the projects currently providing a solution to this problem is [kubed](https://github.com/appscode/kubed).
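As a rough sketch (the Secret name, namespace, and selector below are purely illustrative and not taken from this repository), kubed's config-syncer replicates a Secret annotated with `kubed.appscode.com/sync` to the namespaces matching the given label selector:

```yaml
# Illustrative sketch: annotate the cert-manager-generated Secret so that kubed
# copies it to every namespace carrying the selected label.
apiVersion: v1
kind: Secret
metadata:
  name: crownlabs-certificate                          # hypothetical Secret name
  namespace: certificate-provisioning                  # hypothetical source namespace
  annotations:
    kubed.appscode.com/sync: "sync-certificate=true"   # target namespaces carry this label
```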

### Install kubed

Kubed can be easily installed with helm [[5]](https://appscode.com/products/kubed/v0.12.0/setup/install/).
Kubed can be easily installed with helm [[5]](https://web.archive.org/web/20230605163413/https://appscode.com/products/kubed/v0.12.0/setup/install/).

```bash
helm repo add appscode https://charts.appscode.com/stable/
7 changes: 7 additions & 0 deletions operators/api/v1alpha2/common.go
@@ -49,6 +49,9 @@
SubscrFailed SubscriptionStatus = "Failed"
)

// TemplateLabelPrefix is the prefix of a label assigned to a sharedvolume indicating it is mounted on a template.
const TemplateLabelPrefix = "crownlabs.polito.it/template-"

// WorkspaceLabelPrefix is the prefix of a label assigned to a tenant indicating it is subscribed to a workspace.
const WorkspaceLabelPrefix = "crownlabs.polito.it/workspace-"

@@ -57,3 +60,7 @@

// TnOperatorFinalizerName is the name of the finalizer corresponding to the tenant operator.
const TnOperatorFinalizerName = "crownlabs.polito.it/tenant-operator"

// InstOperatorFinalizerName is the name of the finalizer corresponding to the instance operator.
// TODO: shouldn't we somehow specify that this finalizer is not for the Instance itself, but for the SharedVolume?

Check failure on line 65 in operators/api/v1alpha2/common.go (GitHub Actions / Lint golang files): Comment should end in a period (godot)
const InstOperatorFinalizerName = "crownlabs.polito.it/instance-operator"
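For illustration only (the Template name and the label value below are hypothetical, not taken from this PR), a label built from the new prefix would look as follows on a SharedVolume mounted by a Template:

```yaml
metadata:
  labels:
    crownlabs.polito.it/template-netlab: "true"   # hypothetical Template name and value
```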
96 changes: 96 additions & 0 deletions operators/api/v1alpha2/sharedvolume_types.go
@@ -0,0 +1,96 @@
// Copyright 2020-2025 Politecnico di Torino
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.

package v1alpha2

import (
"k8s.io/apimachinery/pkg/api/resource"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// EDIT THIS FILE! THIS IS SCAFFOLDING FOR YOU TO OWN!
// NOTE: json tags are required. Any new fields you add must have json tags for the fields to be serialized.

// +kubebuilder:validation:Enum="";"Pending";"Provisioning";"Ready";"ResourceQuotaExceeded";"Error"

// SharedVolumePhase is an enumeration of the different phases associated with a SharedVolume.
type SharedVolumePhase string

const (
// SharedVolumePhaseUnset -> the shared volume phase is unknown.
SharedVolumePhaseUnset SharedVolumePhase = ""
// SharedVolumePhasePending -> the shared volume is pending.
SharedVolumePhasePending SharedVolumePhase = "Pending"
// SharedVolumePhaseProvisioning -> the shared volume's PVC is under provisioning.
SharedVolumePhaseProvisioning SharedVolumePhase = "Provisioning"
// SharedVolumePhaseReady -> the shared volume is bound and ready to be accessed.
SharedVolumePhaseReady SharedVolumePhase = "Ready"
// SharedVolumePhaseResourceQuotaExceeded -> the shared volume could not be created because the resource quota is exceeded.
SharedVolumePhaseResourceQuotaExceeded SharedVolumePhase = "ResourceQuotaExceeded"
// SharedVolumePhaseError -> the shared volume had an error during reconcile.
SharedVolumePhaseError SharedVolumePhase = "Error"
)

// SharedVolumeSpec is the specification of the desired state of the Shared Volume.
type SharedVolumeSpec struct {
// The human-readable name of the Shared Volume.
PrettyName string `json:"prettyName"`

// The size of the volume.
Size resource.Quantity `json:"size"`
}

// SharedVolumeStatus reflects the most recently observed status of the Shared Volume.
type SharedVolumeStatus struct {
// The NFS server address.
ServerAddress string `json:"serverAddress,omitempty"`

// The NFS path.
ExportPath string `json:"exportPath,omitempty"`

// The current phase of the lifecycle of the Shared Volume.
Phase SharedVolumePhase `json:"phase,omitempty"`
}

// +kubebuilder:object:root=true
// +kubebuilder:subresource:status
// +kubebuilder:resource:shortName="shvol"
// +kubebuilder:storageversion
// +kubebuilder:printcolumn:name="Pretty Name",type=string,JSONPath=`.spec.prettyName`
// +kubebuilder:printcolumn:name="Size",type=string,JSONPath=`.spec.size`
// +kubebuilder:printcolumn:name="Phase",type=string,JSONPath=`.status.phase`
// +kubebuilder:printcolumn:name="Age",type=date,JSONPath=`.metadata.creationTimestamp`

// SharedVolume describes a shared volume between tenants in CrownLabs.
type SharedVolume struct {
metav1.TypeMeta `json:",inline"`
metav1.ObjectMeta `json:"metadata,omitempty"`

Spec SharedVolumeSpec `json:"spec,omitempty"`
Status SharedVolumeStatus `json:"status,omitempty"`
}

// +kubebuilder:object:root=true

// SharedVolumeList contains a list of SharedVolume objects.
type SharedVolumeList struct {
metav1.TypeMeta `json:",inline"`
metav1.ListMeta `json:"metadata,omitempty"`

Items []SharedVolume `json:"items"`
}

func init() {
SchemeBuilder.Register(&SharedVolume{}, &SharedVolumeList{})
}
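For reference, a minimal SharedVolume manifest matching the types above could look like the following sketch (name, namespace, and status values are illustrative, not taken from this PR):

```yaml
apiVersion: crownlabs.polito.it/v1alpha2
kind: SharedVolume
metadata:
  name: course-data                 # illustrative name
  namespace: workspace-netlab       # illustrative namespace
spec:
  prettyName: Course Data
  size: 5Gi
status:                             # filled in by the operator once the PVC is provisioned
  serverAddress: 10.98.10.15        # illustrative NFS server address
  exportPath: /course-data          # illustrative NFS export path
  phase: Ready
```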
15 changes: 15 additions & 0 deletions operators/api/v1alpha2/template_types.go
@@ -126,6 +126,9 @@ type Environment struct {
// +kubebuilder:default=true
// Whether the instance has to have the user's MyDrive volume
MountMyDriveVolume bool `json:"mountMyDriveVolume"`

// The list of mount information for the Shared Volumes that have to be mounted in the instance.
SharedVolumeMounts []SharedVolumeMountInfo `json:"sharedVolumeMounts,omitempty"`
}

// EnvironmentResources is the specification of the amount of resources
@@ -175,6 +178,18 @@ type ContainerStartupOpts struct {
EnforceWorkdir bool `json:"enforceWorkdir"`
}

// SharedVolumeMountInfo contains mount information for a Shared Volume.
type SharedVolumeMountInfo struct {
// The reference to the Shared Volume this mount info refers to.
SharedVolumeRef GenericRef `json:"sharedVolume"`

// The path inside the instance where the Shared Volume will be mounted.
MountPath string `json:"mountPath"`

// Whether this Shared Volume should be mounted with R/W or R/O permission.
ReadOnly bool `json:"readOnly"`
}

// +kubebuilder:object:root=true
// +kubebuilder:subresource:status
// +kubebuilder:resource:shortName="tmpl"
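A hedged sketch of how an environment in a Template could reference a Shared Volume through the new field (all names and paths below are illustrative):

```yaml
environmentList:
  - name: netlab-env                  # illustrative environment; unrelated fields omitted
    mountMyDriveVolume: true
    sharedVolumeMounts:
      - sharedVolume:                 # GenericRef to the SharedVolume object
          name: course-data
          namespace: workspace-netlab
        mountPath: /media/shared      # path inside the instance
        readOnly: true
```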
111 changes: 111 additions & 0 deletions operators/api/v1alpha2/zz_generated.deepcopy.go

Some generated files are not rendered by default.

15 changes: 15 additions & 0 deletions operators/cmd/instance-operator/main.go
@@ -40,6 +40,7 @@ import (
instancesnapshot_controller "github.com/netgroup-polito/CrownLabs/operators/pkg/instancesnapshot-controller"
"github.com/netgroup-polito/CrownLabs/operators/pkg/instautoctrl"
"github.com/netgroup-polito/CrownLabs/operators/pkg/instctrl"
"github.com/netgroup-polito/CrownLabs/operators/pkg/shvolctrl"
"github.com/netgroup-polito/CrownLabs/operators/pkg/utils/restcfg"
)

@@ -71,6 +72,8 @@ func main() {
"which the controller will work. Different labels (key=value) can be specified, by separating them with a &"+
"( e.g. key1=value1&key2=value2")

sharedVolumeStorageClass := flag.String("shared-volume-storage-class", "rook-nfs", "The StorageClass to be used for all SharedVolumes' PVCs (if unique, it can be used to enforce a ResourceQuota on Workspaces limiting the number and total size of ShVols)")

maxConcurrentTerminationReconciles := flag.Int("max-concurrent-reconciles-termination", 1, "The maximum number of concurrent Reconciles which can be run for the Instance Termination controller")
instanceTerminationStatusCheckTimeout := flag.Duration("instance-termination-status-check-timeout", 3*time.Second, "The maximum time to wait for the status check for Instances that require it")
instanceTerminationStatusCheckInterval := flag.Duration("instance-termination-status-check-interval", 2*time.Minute, "The interval to check the status of Instances that require it")
@@ -173,6 +176,18 @@ func main() {
os.Exit(1)
}

// Configure the SharedVolume controller
const sharedVolumeCtrl = "SharedVolume"
if err := (&shvolctrl.SharedVolumeReconciler{
Client: mgr.GetClient(), // TODO: it works even without the Scheme, should we add it back?
EventsRecorder: mgr.GetEventRecorderFor(sharedVolumeCtrl),
NamespaceWhitelist: nsWhitelist,
PVCStorageClass: *sharedVolumeStorageClass,
}).SetupWithManager(mgr, *maxConcurrentSubmissionReconciles); err != nil {
log.Error(err, "unable to create controller", "controller", sharedVolumeCtrl)
os.Exit(1)
}

// Add readiness probe
err = mgr.AddReadyzCheck("ready-ping", healthz.Ping)
if err != nil {
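Concerning the new `--shared-volume-storage-class` flag: when a single StorageClass (e.g. the default `rook-nfs`) backs every SharedVolume PVC, a per-namespace ResourceQuota scoped to that class can bound the number and total size of ShVols per Workspace, along the lines of this sketch (name, namespace, and limits are illustrative):

```yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  name: shared-volumes-quota        # illustrative name
  namespace: workspace-netlab       # illustrative Workspace namespace
spec:
  hard:
    # StorageClass-scoped quota keys limit PVC count and requested storage for rook-nfs
    rook-nfs.storageclass.storage.k8s.io/persistentvolumeclaims: "5"
    rook-nfs.storageclass.storage.k8s.io/requests.storage: 50Gi
```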