diff --git a/docs/content/manual/harvester-rancher/30-configure-harvester-loadbalancer-service.md b/docs/content/manual/harvester-rancher/30-configure-harvester-loadbalancer-service.md
index 8442200e5..f5dfa18ac 100644
--- a/docs/content/manual/harvester-rancher/30-configure-harvester-loadbalancer-service.md
+++ b/docs/content/manual/harvester-rancher/30-configure-harvester-loadbalancer-service.md
@@ -4,35 +4,35 @@ title: 30-Configure Harvester LoadBalancer service
 Prerequisite: Already provision RKE1/RKE2 cluster in previous test case
 
-- Open `Global Settings` in hamburger menu
-- Replace `ui-dashboard-index` to `https://releases.rancher.com/harvester-ui/dashboard/latest/index.html`
-- Change `ui-offline-preferred` to `Remote`
-- Refresh the current page (ctrl + r)
-- Open provisioned RKE2 cluster from hamburger menu
-- Drop down `Service Discovery`
-- Click `Services`
-- Click Create
-- Select `Load Balancer`
+1. Open `Global Settings` in hamburger menu
+1. Replace `ui-dashboard-index` with `https://releases.rancher.com/harvester-ui/dashboard/latest/index.html`
+1. Change `ui-offline-preferred` to `Remote`
+1. Refresh the current page (ctrl + r)
+1. Open provisioned RKE2 cluster from hamburger menu
+1. Drop down `Service Discovery`
+1. Click `Services`
+1. Click Create
+1. Select `Load Balancer`
 
 ![image.png](https://images.zenhubusercontent.com/61519853321ea20d65443929/f628094c-a195-4f99-9fb7-858d759dc019)
 
-- Given service name to make the load balancer name composed of the cluster name, namespace, svc name, and suffix(8 characters) more than 63 characters
-- Provide Listening port and Target port
+1. Give a service name that makes the load balancer name, composed of the cluster name, namespace, service name, and an 8-character suffix, exceed 63 characters
+1. Provide Listening port and Target port
 
 ![image.png](https://images.zenhubusercontent.com/61519853321ea20d65443929/2c20c759-4769-438b-94ad-5b995ba66873)
 
-- Click `Add-on Config`
-- Select Health Check port
-- Select `dhcp` as IPAM mode
-- Provide Health Check Threshold
-- Provide Health Check Failure Threshold
-- Provide Health Check Period
-- Provide Health Check Timeout
-- Click Create button
+1. Click `Add-on Config`
+1. Select Health Check port
+1. Select `dhcp` as IPAM mode
+1. Provide Health Check Threshold
+1. Provide Health Check Failure Threshold
+1. Provide Health Check Period
+1. Provide Health Check Timeout
+1. Click Create button
 
 ![image.png](https://images.zenhubusercontent.com/61519853321ea20d65443929/a8d11df6-cc76-4897-8310-def670682775)
 
-- Create another load balancer service with the name characters.
+1. Create another load balancer service whose generated name stays within the 63-character limit.
 
 ## Expected Results
 - Can create the load balance service correctly
diff --git a/docs/content/manual/harvester-rancher/31-specify-pool-ipam-mode-loadbalancer-service.md b/docs/content/manual/harvester-rancher/31-specify-pool-ipam-mode-loadbalancer-service.md
index a7e7344cf..f313a01d8 100644
--- a/docs/content/manual/harvester-rancher/31-specify-pool-ipam-mode-loadbalancer-service.md
+++ b/docs/content/manual/harvester-rancher/31-specify-pool-ipam-mode-loadbalancer-service.md
@@ -8,6 +8,10 @@ Already provision RKE1/RKE2 cluster in previous test case
 1. Replace `ui-dashboard-index` to `https://releases.rancher.com/harvester-ui/dashboard/latest/index.html`
 1. Change `ui-offline-preferred` to `Remote`
 1. Refresh the current page (ctrl + r)
+1. Access Harvester dashboard UI
+1. Go to Settings
+1. Create a vip-pool in Harvester settings (see the example below).
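+   The `vip-pools` setting takes a JSON map of namespace to an IP range or CIDR; a minimal sketch (addresses are placeholders for your environment):
+   ```
+   {
+     "default": "192.168.0.100-192.168.0.120"
+   }
+   ```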
+   ![image](https://user-images.githubusercontent.com/29251855/158514040-bfcd9ff3-964a-4511-94d7-a497ef88848f.png)
 1. Open provisioned RKE2 cluster from hamburger menu
 1. Drop down `Service Discovery`
 1. Click `Services`
@@ -34,4 +38,4 @@ Already provision RKE1/RKE2 cluster in previous test case
 
 ## Expected Results
 1. Can create load balance service correctly
-1. Can operate and foward workload as expected
\ No newline at end of file
+1. Can operate and route to deployed service correctly
\ No newline at end of file
diff --git a/docs/content/manual/harvester-rancher/57-import-airgapped-harvester-from-airgapped-rancher-rke1.md b/docs/content/manual/harvester-rancher/57-import-airgapped-harvester-from-airgapped-rancher-proxy.md
similarity index 97%
rename from docs/content/manual/harvester-rancher/57-import-airgapped-harvester-from-airgapped-rancher-rke1.md
rename to docs/content/manual/harvester-rancher/57-import-airgapped-harvester-from-airgapped-rancher-proxy.md
index 2f13b768e..7d6b54af1 100644
--- a/docs/content/manual/harvester-rancher/57-import-airgapped-harvester-from-airgapped-rancher-rke1.md
+++ b/docs/content/manual/harvester-rancher/57-import-airgapped-harvester-from-airgapped-rancher-proxy.md
@@ -1,5 +1,5 @@
 ---
-title: 57-Import airgapped harvester from airgapped rancher rke1
+title: 57-Import airgapped harvester from airgapped rancher with Proxy
 ---
 
 * Related task: [#1052](https://github.com/harvester/harvester/issues/1052) Test Air gap with Rancher integration
diff --git a/docs/content/manual/harvester-rancher/59-create-k3s-kubernetes-cluster.md b/docs/content/manual/harvester-rancher/59-create-k3s-kubernetes-cluster.md
new file mode 100644
index 000000000..773a1789b
--- /dev/null
+++ b/docs/content/manual/harvester-rancher/59-create-k3s-kubernetes-cluster.md
@@ -0,0 +1,49 @@
+---
+title: 59-Create K3s Kubernetes Cluster
+---
+1. Click Cluster Management
+1. Click Cloud Credentials
+1. Click Create and select `Harvester`
+1. Input credential name
+1. Select existing cluster in the `Imported Cluster` list
+1. Click Create
+
+![image.png](https://images.zenhubusercontent.com/61519853321ea20d65443929/4a2f6a52-dac7-4a27-84b3-14cbeb4156aa)
+
+1. Click Clusters
+1. Click Create
+1. Toggle RKE2/K3s
+1. Select Harvester
+1. Input `Cluster Name`
+1. Select `default` namespace
+1. Select ubuntu image
+1. Select network `vlan1`
+1. Input SSH User: `ubuntu`
+![image](https://user-images.githubusercontent.com/29251855/166188165-588adc48-fb41-4a01-a59e-9b059eb06949.png)
+1. Click `Show Advanced`
+1. Add the following user data:
+   ```
+   password: 123456
+   chpasswd: { expire: false }
+   ssh_pwauth: true
+   ```
+   ![image](https://user-images.githubusercontent.com/29251855/166188400-2e5e3051-f5ce-4b40-8497-71d6ff3cfdfa.png)
+
+1. Click the Kubernetes version drop-down list
+1. Select a K3s Kubernetes version
+   ![image](https://user-images.githubusercontent.com/29251855/165777245-6059f10d-da2f-49d3-9da3-3b72491f7051.png)
+
+1. Click `Advanced`
+1. Add `Arguments`
+1. Add `cloud-provider=external`
+   ![image](https://user-images.githubusercontent.com/29251855/166189212-d422a433-7ac7-4f26-80fd-452c6df966ae.png)
+1. Click Create
+1. Wait for K3s cluster provisioning to complete
+
+## Expected Results
+1. Can provision the K3s cluster successfully with `Running` status
+![image](https://user-images.githubusercontent.com/29251855/166189728-d98b92a3-aa0d-44a8-951c-e637f9530031.png)
+
+1. Can access the K3s cluster to check all resources and services
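+   For example, from the cluster's kubectl shell (any cluster-wide listing works as a quick smoke test):
+   ```
+   kubectl get nodes -o wide
+   kubectl get pods -A
+   ```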
+![image](https://user-images.githubusercontent.com/29251855/166189812-2fa41514-e416-4a0d-91ec-537ecd9a3b00.png)
+
diff --git a/docs/content/manual/harvester-rancher/60-delete-k3s-kubernetes-cluster.md b/docs/content/manual/harvester-rancher/60-delete-k3s-kubernetes-cluster.md
new file mode 100644
index 000000000..6c0b9fdc4
--- /dev/null
+++ b/docs/content/manual/harvester-rancher/60-delete-k3s-kubernetes-cluster.md
@@ -0,0 +1,13 @@
+---
+title: 60-Delete K3s Kubernetes Cluster
+---
+1. Open Cluster Management
+1. Check provisioned K3s cluster
+1. Click `Delete` from menu
+
+
+## Expected Results
+1. Can remove the K3s cluster, and it disappears from the Clusters page
+1. K3s cluster is removed from the Rancher menu under Explore Cluster
+1. K3s virtual machine should also be removed from Harvester
+
diff --git a/docs/content/manual/harvester-rancher/61-deploy-harvester-cloud-provider-to-k3s-cluster.md b/docs/content/manual/harvester-rancher/61-deploy-harvester-cloud-provider-to-k3s-cluster.md
new file mode 100644
index 000000000..c8a00453b
--- /dev/null
+++ b/docs/content/manual/harvester-rancher/61-deploy-harvester-cloud-provider-to-k3s-cluster.md
@@ -0,0 +1,39 @@
+---
+title: 61-Deploy Harvester cloud provider to k3s Cluster
+---
+
+* Related task: [#1812](https://github.com/harvester/harvester/issues/1812) K3s cloud provider and csi driver support
+
+### Environment Setup
+1. Install Rancher v2.6.4 with Docker
+1. Create a one-node Harvester cluster with enough resources
+
+### Verify steps
+Follow steps **1~13** in test plan `59-Create K3s Kubernetes Cluster`
+
+1. Click the Edit yaml button
+   ![image](https://user-images.githubusercontent.com/29251855/166190410-47331a84-1d4e-4478-9d85-e68a3da91626.png)
+1. Set `disable-cloud-provider: true` to disable the default K3s cloud provider.
+   ![image](https://user-images.githubusercontent.com/29251855/158510820-4d8a0021-1675-4c92-86b9-a6427f2e382b.png)
+1. Add `cloud-provider=external` to use the Harvester cloud provider.
+   ![image](https://user-images.githubusercontent.com/29251855/158511002-47a4a532-7f67-4eb0-8da4-074c6d9752e9.png)
+1. Create K3s cluster
+![image](https://user-images.githubusercontent.com/29251855/158511706-1c0c6af5-8909-4b1d-bc2a-0fa2fa26e000.png)
+1. Download the [Generate addon configuration](https://github.com/harvester/cloud-provider-harvester/blob/master/deploy/generate_addon.sh) script for the cloud provider
+1. Download the Harvester kubeconfig and add it into your local ~/.kube/config file
+1. Generate the K3s kubeconfig by running the generate addon script
+   `./deploy/generate_addon.sh <serviceaccount name> <namespace>`
+   e.g. `./generate_addon.sh k3s-focal-cloud-provider default`
+1. Copy the kubeconfig content
+1. ssh to K3s VM
+   ![image](https://user-images.githubusercontent.com/29251855/158534901-8fd22159-6a04-4592-ba25-ba4d73742a20.png)
+1. Add the kubeconfig content to the `/etc/kubernetes/cloud-config` file; remember to align the YAML layout, as in the sketch below
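+   An illustrative skeleton of what the file holds (it is the kubeconfig produced by the script; all values here are placeholders):
+   ```
+   apiVersion: v1
+   kind: Config
+   clusters:
+   - name: default
+     cluster:
+       certificate-authority-data: <base64-encoded CA>
+       server: https://<harvester-vip>:6443
+   users:
+   - name: cloud-provider
+     user:
+       token: <serviceaccount token>
+   contexts:
+   - name: default
+     context:
+       cluster: default
+       namespace: default
+       user: cloud-provider
+   current-context: default
+   ```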
+1. Install Harvester cloud provider
+   ![image](https://user-images.githubusercontent.com/29251855/158512528-42ff575a-87a6-4424-bfb5-fa7af94ea74d.png)
+   ![image](https://user-images.githubusercontent.com/29251855/158512667-18b0249c-f859-4ae4-96b7-42ce873cb97a.png)
+
+
+## Expected Results
+1. Can install the Harvester cloud provider on the K3s cluster correctly
+   ![image](https://user-images.githubusercontent.com/29251855/158512758-d06df2f6-7094-4d41-b960-d50b26cd23fb.png)
+
diff --git a/docs/content/manual/harvester-rancher/62-configure-k3s-dhcp-loadbalancer.md b/docs/content/manual/harvester-rancher/62-configure-k3s-dhcp-loadbalancer.md
new file mode 100644
index 000000000..449909292
--- /dev/null
+++ b/docs/content/manual/harvester-rancher/62-configure-k3s-dhcp-loadbalancer.md
@@ -0,0 +1,46 @@
+---
+title: 62-Configure the K3s "DHCP" LoadBalancer service
+---
+Prerequisite:
+Already provisioned K3s cluster and cloud provider in test plans
+* 59-Create K3s Kubernetes Cluster
+* 61-Deploy Harvester cloud provider to k3s Cluster
+
+#### Create Nginx workload for testing
+1. Create a test-nginx deployment with image nginx:latest.
+   ![image](https://user-images.githubusercontent.com/29251855/158512919-a35a079a-aa75-4ce8-bac6-a79438a2e112.png)
+1. Add pod label test: test.
+   ![image](https://user-images.githubusercontent.com/29251855/158513017-5afc909a-662a-4f4e-b867-2555241a2cbd.png)
+   ![image](https://user-images.githubusercontent.com/29251855/158513105-09ab472b-7cd4-4352-b4e1-84f673ee7088.png)
+
+#### Create a DHCP LoadBalancer
+1. Open Kubectl shell.
+1. Create `test-dhcp-lb.yaml` file.
+   ```
+   apiVersion: v1
+   kind: Service
+   metadata:
+     annotations:
+       cloudprovider.harvesterhci.io/ipam: dhcp
+     name: test-dhcp-lb
+     namespace: default
+   spec:
+     ports:
+     - name: http
+       nodePort: 30172
+       port: 8080
+       protocol: TCP
+       targetPort: 80
+     selector:
+       test: test
+     sessionAffinity: None
+     type: LoadBalancer
+   ```
+1. Run `kubectl apply -f test-dhcp-lb.yaml` to apply it.
+   ![image](https://user-images.githubusercontent.com/29251855/158513659-3e0c487b-c819-492c-8c62-62fc644fd858.png)
+1. The DHCP LoadBalancer should get an IP from the DHCP server and work; see the check below.
+   ![image](https://user-images.githubusercontent.com/29251855/158513800-70d7c0ba-5a4b-4462-90df-c8d05b5a389d.png)
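+   One way to verify is to check the service's external IP (column names from standard kubectl output):
+   ```
+   kubectl get service test-dhcp-lb
+   # EXTERNAL-IP should show an address leased by the DHCP server
+   ```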
+
+## Expected Results
+1. Can create the load balancer service correctly
+1. Can route workload to nginx deployment
\ No newline at end of file
diff --git a/docs/content/manual/harvester-rancher/63-configure-k3s-pool-loadbalancer.md b/docs/content/manual/harvester-rancher/63-configure-k3s-pool-loadbalancer.md
new file mode 100644
index 000000000..e47ca92a5
--- /dev/null
+++ b/docs/content/manual/harvester-rancher/63-configure-k3s-pool-loadbalancer.md
@@ -0,0 +1,48 @@
+---
+title: 63-Configure the K3s "Pool" LoadBalancer service
+---
+Prerequisite:
+Already provisioned K3s cluster and cloud provider in test plans
+* 59-Create K3s Kubernetes Cluster
+* 61-Deploy Harvester cloud provider to k3s Cluster
+
+#### Create Nginx workload for testing
+1. Create a test-nginx deployment with image nginx:latest.
+   ![image](https://user-images.githubusercontent.com/29251855/158512919-a35a079a-aa75-4ce8-bac6-a79438a2e112.png)
+1. Add pod label test: test.
+   ![image](https://user-images.githubusercontent.com/29251855/158513017-5afc909a-662a-4f4e-b867-2555241a2cbd.png)
+
+#### Create a Pool LoadBalancer
+1. Modify vip-pool in Harvester settings.
+   ![image](https://user-images.githubusercontent.com/29251855/158514040-bfcd9ff3-964a-4511-94d7-a497ef88848f.png)
+
+1. Open Kubectl shell.
+1. Create `test-pool-lb.yaml` file.
+   ```
+   apiVersion: v1
+   kind: Service
+   metadata:
+     annotations:
+       cloudprovider.harvesterhci.io/ipam: pool
+     name: test-pool-lb
+     namespace: default
+   spec:
+     ports:
+     - name: http
+       nodePort: 32155
+       port: 8080
+       protocol: TCP
+       targetPort: 80
+     selector:
+       test: test
+     sessionAffinity: None
+     type: LoadBalancer
+   ```
+1. Run `kubectl apply -f test-pool-lb.yaml` to apply it.
+1. The Pool LoadBalancer should get an IP from vip-pool and work.
+
+
+## Expected Results
+1. Can create `Pool` load balancer service correctly
+1. Can route workload to nginx deployment
+   ![image](https://user-images.githubusercontent.com/29251855/158514315-1b570f64-fe18-400e-acc4-56d03bc30e61.png)
\ No newline at end of file
diff --git a/docs/content/manual/harvester-rancher/64-configure-k3s-dhcp-LB-healcheck.md b/docs/content/manual/harvester-rancher/64-configure-k3s-dhcp-LB-healcheck.md
new file mode 100644
index 000000000..5a361ecc9
--- /dev/null
+++ b/docs/content/manual/harvester-rancher/64-configure-k3s-dhcp-LB-healcheck.md
@@ -0,0 +1,19 @@
+---
+title: 64-Configure the K3s "DHCP" LoadBalancer health check
+---
+Prerequisite:
+Already provisioned K3s cluster and cloud provider in test plans
+* 59-Create K3s Kubernetes Cluster
+* 61-Deploy Harvester cloud provider to k3s Cluster
+* 62-Configure the K3s "DHCP" LoadBalancer service
+
+1. A `Working` DHCP load balancer service created on K3s cluster
+1. Edit Load balancer config
+1. Check the "Add-on Config" tab
+1. Configure `port`, `IPAM` and `health check` related settings on the `Add-on Config` page
+![image](https://user-images.githubusercontent.com/29251855/141245366-799057f1-2aa7-4d7a-90d2-5e11541ddbc3.png)
+
+
+## Expected Results
+1. Can create the load balancer service correctly
+1. Can route workload to nginx deployment
\ No newline at end of file
diff --git a/docs/content/manual/harvester-rancher/65-configure-k3s-pool-LB-healthcheck.md b/docs/content/manual/harvester-rancher/65-configure-k3s-pool-LB-healthcheck.md
new file mode 100644
index 000000000..8160164b0
--- /dev/null
+++ b/docs/content/manual/harvester-rancher/65-configure-k3s-pool-LB-healthcheck.md
@@ -0,0 +1,19 @@
+---
+title: 65-Configure the K3s "Pool" LoadBalancer health check
+---
+Prerequisite:
+Already provisioned K3s cluster and cloud provider in test plans
+* 59-Create K3s Kubernetes Cluster
+* 61-Deploy Harvester cloud provider to k3s Cluster
+* 63-Configure the K3s "Pool" LoadBalancer service
+
+1. A `Working` Pool load balancer service created on K3s cluster
+1. Edit Load balancer config
+1. Check the "Add-on Config" tab
+1. Configure `port`, `IPAM` and `health check` related settings on the `Add-on Config` page; they map to service annotations as sketched below
+![image](https://user-images.githubusercontent.com/29251855/141245366-799057f1-2aa7-4d7a-90d2-5e11541ddbc3.png)
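+
+The UI fields end up as annotations on the Service; an illustrative sketch (annotation keys follow the Harvester cloud provider's naming, so verify them against your version):
+```
+metadata:
+  annotations:
+    cloudprovider.harvesterhci.io/ipam: pool
+    cloudprovider.harvesterhci.io/healthcheck-port: "80"
+    cloudprovider.harvesterhci.io/healthcheck-success-threshold: "1"
+    cloudprovider.harvesterhci.io/healthcheck-failure-threshold: "3"
+    cloudprovider.harvesterhci.io/healthcheck-periodseconds: "5"
+    cloudprovider.harvesterhci.io/healthcheck-timeoutseconds: "3"
+```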
+
+
+## Expected Results
+1. Can create the load balancer service correctly
+1. Can route workload to nginx deployment
\ No newline at end of file
diff --git a/docs/content/manual/harvester-rancher/66-deploy-harvester-csi-driver-to-k3s-cluster.md b/docs/content/manual/harvester-rancher/66-deploy-harvester-csi-driver-to-k3s-cluster.md
new file mode 100644
index 000000000..8ce355d5d
--- /dev/null
+++ b/docs/content/manual/harvester-rancher/66-deploy-harvester-csi-driver-to-k3s-cluster.md
@@ -0,0 +1,53 @@
+---
+title: 66-Deploy Harvester csi driver to k3s Cluster
+---
+
+* Related task: [#1812](https://github.com/harvester/harvester/issues/1812) K3s cloud provider and csi driver support
+
+### Environment Setup
+1. Install Rancher v2.6.4 with Docker
+1. Create a one-node Harvester cluster with enough resources
+
+### Verify steps
+Follow steps **1~13** in test plan `59-Create K3s Kubernetes Cluster`
+
+1. Download the [Generate addon configuration](https://github.com/harvester/harvester-csi-driver/blob/master/deploy/generate_addon.sh) script for the CSI driver
+1. ssh to Harvester node 1
+1. Execute `cat /etc/rancher/rke2/rke2.yaml`
+1. Change the server value from https://127.0.0.1:6443 to your node 1 IP
+1. Copy the kubeconfig and add it into your local ~/.kube/config file
+1. Generate the K3s kubeconfig by running the generate addon script
+   `./deploy/generate_addon.sh <serviceaccount name> <namespace>`
+   e.g. `./generate_addon.sh k3s-csi-cluster default`
+1. Copy the K3s kubeconfig content starting at `cloud-provider-config:` and ending before `addons_include:`
+1. ssh to K3s VM
+   ![image](https://user-images.githubusercontent.com/29251855/158932912-38297b70-7546-4349-801f-6a4b8b973305.png)
+
+1. Add the kubeconfig content to the `/etc/kubernetes/cloud-config` file; remember to align the YAML layout
+1. Install Harvester csi driver (see the sketch after the screenshots)
+   ![image](https://user-images.githubusercontent.com/29251855/158550983-61cff655-66a6-4a49-96bd-6e208f4fc9d8.png)
+   ![image](https://user-images.githubusercontent.com/29251855/158551034-090cee3d-9bd6-425c-84d8-16f6a66a5c64.png)
+   ![image](https://user-images.githubusercontent.com/29251855/158933473-a0172a62-5f7c-4e68-860c-c9cb38275791.png)
+   ![image](https://user-images.githubusercontent.com/29251855/158933756-d4198111-ae05-4a7d-8ea8-d21e3f3d2b87.png)
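+
+   If installing with Helm instead of the UI, a minimal sketch (chart repo and name assumed from the Harvester charts; verify for your release):
+   ```
+   helm repo add harvester https://charts.harvesterhci.io
+   helm repo update
+   helm install harvester-csi-driver harvester/harvester-csi-driver -n kube-system
+   ```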
+
+
+
+
+## Expected Results
+1. Can deploy K3s cluster on Harvester with Kubernetes version `v1.23.4+k3s1`
+   ![image](https://user-images.githubusercontent.com/29251855/158935474-9d6f1c37-ea59-485d-83f5-6f1b19ebfa98.png)
+
+1. Can correctly install the Harvester csi driver on the K3s cluster
+   ![image](https://user-images.githubusercontent.com/29251855/158933473-a0172a62-5f7c-4e68-860c-c9cb38275791.png)
+   ![image](https://user-images.githubusercontent.com/29251855/158933756-d4198111-ae05-4a7d-8ea8-d21e3f3d2b87.png)
+
+1. Can deploy nginx service with new PVC created
+   ![image](https://user-images.githubusercontent.com/29251855/158934469-6b050d39-a45c-493d-9e29-89e16c1cf23d.png)
+   ![image](https://user-images.githubusercontent.com/29251855/158934499-6c7d9525-f43a-4426-8786-1e4aec099964.png)
+   ![image](https://user-images.githubusercontent.com/29251855/158934531-9c42b704-67cc-4c91-b24e-fa29f7183f05.png)
+1. Can allocate size in nginx container
+   ![image](https://user-images.githubusercontent.com/29251855/158934986-c08ddccc-3b33-4508-9861-6d10c5ded3c5.png)
+1. Can resize and delete volume with Harvester storage class
+   ![image](https://user-images.githubusercontent.com/29251855/158935263-a8b6fa9e-1f4b-43ae-a687-6df3e34986a1.png)
+   ![image](https://user-images.githubusercontent.com/29251855/158935288-67409de3-d5d5-4c9d-af04-421f4153658b.png)
+
diff --git a/docs/content/manual/harvester-rancher/67-harvester-persistent-volume-on-k3s-cluster.md b/docs/content/manual/harvester-rancher/67-harvester-persistent-volume-on-k3s-cluster.md
new file mode 100644
index 000000000..254bf5e7f
--- /dev/null
+++ b/docs/content/manual/harvester-rancher/67-harvester-persistent-volume-on-k3s-cluster.md
@@ -0,0 +1,57 @@
+---
+title: 67-Harvester persistent volume on k3s Cluster
+---
+
+* Related task: [#1812](https://github.com/harvester/harvester/issues/1812) K3s cloud provider and csi driver support
+
+### Environment Setup
+1. Install Rancher v2.6.4 with Docker
+1. Create a one-node Harvester cluster with enough resources
+
+### Verify steps
+Follow steps **1~6** in test plan `66-Deploy Harvester csi driver to k3s Cluster`
+
+#### Create Nginx workload for testing
+1. Create a nginx-csi deployment with image nginx:latest.
+   ![image](https://user-images.githubusercontent.com/29251855/158934043-11d94957-5b85-469e-bdf7-5658edfec5d9.png)
+
+
+1. Create a new PVC in the storage tab:
+   ![image](https://user-images.githubusercontent.com/29251855/158934170-8cb212e7-c84c-48d4-a15d-b921a9e6a8fc.png)
+
+1. Complete the nginx deployment; a related PV is created as a Harvester volume
+   ![image](https://user-images.githubusercontent.com/29251855/158934469-6b050d39-a45c-493d-9e29-89e16c1cf23d.png)
+   ![image](https://user-images.githubusercontent.com/29251855/158934499-6c7d9525-f43a-4426-8786-1e4aec099964.png)
+   ![image](https://user-images.githubusercontent.com/29251855/158934531-9c42b704-67cc-4c91-b24e-fa29f7183f05.png)
+
+
+1. Click Execute shell to access the Nginx container.
+1. Run `dd if=/dev/zero of=/test/tmpfile bs=1M count=512` to allocate space on the volume
+1. Run `dd if=/dev/null of=/test/tmpfile bs=1M count=512` to truncate the file again
+   ![image](https://user-images.githubusercontent.com/29251855/158934986-c08ddccc-3b33-4508-9861-6d10c5ded3c5.png)
+
+1. Delete the Nginx deployment.
+
+#### Resize and delete volume with harvester storage class
+1. Switch to the Harvester dashboard.
+1. Click the Edit config for the volume.
+1. Change the volume size. We can see the volume is in Resizing status.
+![image](https://user-images.githubusercontent.com/29251855/158935263-a8b6fa9e-1f4b-43ae-a687-6df3e34986a1.png)
+![image](https://user-images.githubusercontent.com/29251855/158935288-67409de3-d5d5-4c9d-af04-421f4153658b.png)
+1. Delete the PVC in the K3s dashboard.
+1. The related volume should be deleted in Harvester too.
+
+
+
+
+## Expected Results
+1. Can deploy nginx service with new PVC created
+   ![image](https://user-images.githubusercontent.com/29251855/158934469-6b050d39-a45c-493d-9e29-89e16c1cf23d.png)
+   ![image](https://user-images.githubusercontent.com/29251855/158934499-6c7d9525-f43a-4426-8786-1e4aec099964.png)
+   ![image](https://user-images.githubusercontent.com/29251855/158934531-9c42b704-67cc-4c91-b24e-fa29f7183f05.png)
+1. Can allocate size in nginx container
+   ![image](https://user-images.githubusercontent.com/29251855/158934986-c08ddccc-3b33-4508-9861-6d10c5ded3c5.png)
+1. Can resize and delete volume with Harvester storage class
+   ![image](https://user-images.githubusercontent.com/29251855/158935263-a8b6fa9e-1f4b-43ae-a687-6df3e34986a1.png)
+   ![image](https://user-images.githubusercontent.com/29251855/158935288-67409de3-d5d5-4c9d-af04-421f4153658b.png)
+
diff --git a/docs/content/manual/harvester-rancher/68-Fully-airgapped-rancher-integrate-harvester-no-proxy.md b/docs/content/manual/harvester-rancher/68-Fully-airgapped-rancher-integrate-harvester-no-proxy.md
new file mode 100644
index 000000000..e3be7242d
--- /dev/null
+++ b/docs/content/manual/harvester-rancher/68-Fully-airgapped-rancher-integrate-harvester-no-proxy.md
@@ -0,0 +1,530 @@
+---
+title: 68-Fully airgapped rancher integrate with harvester with no proxy
+---
+
+* Related task: [#1808](https://github.com/harvester/harvester/issues/1808) RKE2 provisioning fails when Rancher has no internet access (air-gapped)
+
+* **Note1**: In a fully air-gapped environment, you have to set up a private Docker registry and pull all Rancher-related offline images
+* **Note2**: Please use the SUSE SLES JeOS image; it has `qemu-guest-agent` already installed, so the guest VM can get an IP correctly
+
+### Environment Setup
+
+Setup the airgapped harvester
+1. Fetch the ipxe vagrant example with the new offline feature
+https://github.com/harvester/ipxe-examples/pull/32
+1. Edit the settings.yml file
+1. Set offline: `true`
+1. Use the ipxe vagrant example to set up a 3-node cluster
+1. Enable vlan on `harvester-mgmt`
+1. The Harvester dashboard page will now stop working (no internet access)
+1. Create a virtual machine network with name `vlan1` and id: `1`
+1. Create an ubuntu cloud image from File
+
+#### Phase 1: setup airgap Harvester
+1. Set offline: true in [vagrant-pxe-harvester](https://github.com/harvester/ipxe-examples/blob/f75483563f192090dadd48051bbc3b538c30cd34/vagrant-pxe-harvester/settings.yml#L42-L44).
+1. Set up a 1-node Harvester with sufficient resources
+1. Run ./setup_harvester.sh.
+
+#### Phase 2: Create a virtual machine for Rancher in air gapped environment
+1. Create a virtual machine with at least 300GB storage (OS: Ubuntu Desktop 20.04)
+1. Add `harvester` and `vagrant-libvirt` network to the virtual machine
+   - `harvester` for internal
+   - `vagrant-libvirt` for external
+
+#### Phase 3: Setup a private docker registry and download all the offline images for Rancher
+1. Install VIM
+   ```
+   sudo apt update
+   sudo apt install vim
+   ```
+1. Install Docker
+   ```
+   sudo apt-get update
+   sudo apt-get install \
+     ca-certificates \
+     curl \
+     gnupg \
+     lsb-release
+   curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /usr/share/keyrings/docker-archive-keyring.gpg
+   echo \
+     "deb [arch=$(dpkg --print-architecture) signed-by=/usr/share/keyrings/docker-archive-keyring.gpg] https://download.docker.com/linux/ubuntu \
+     $(lsb_release -cs) stable" | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
+   sudo apt-get update
+   sudo apt-get install docker-ce docker-ce-cli containerd.io
+   ```
+1. Add current user to docker group
+   ```
+   sudo groupadd docker
+   sudo usermod -aG docker $USER
+   ```
+1. Logout and login again.
+1. Install helm
+   ```
+   curl https://baltocdn.com/helm/signing.asc | sudo apt-key add -
+   sudo apt-get install apt-transport-https --yes
+   echo "deb https://baltocdn.com/helm/stable/debian/ all main" | sudo tee /etc/apt/sources.list.d/helm-stable-debian.list
+   sudo apt-get update
+   sudo apt-get install helm
+   ```
+1. Create certs folder
+   ```
+   mkdir -p certs
+   ```
+1. Generate private registry certificate files
+   ```
+   openssl req \
+     -newkey rsa:4096 -nodes -sha256 -keyout certs/domain.key \
+     -addext "subjectAltName = DNS:myregistry.test" \
+     -x509 -days 365 -out certs/domain.crt
+   ```
+1. Move certificate files
+   ```
+   sudo mkdir -p /etc/docker/certs.d/myregistry.test:5000
+   sudo cp certs/domain.crt /etc/docker/certs.d/myregistry.test:5000/domain.crt
+   ```
+1. Start docker registry
+   ```
+   docker run -d \
+     -p 5000:5000 \
+     --restart=always \
+     --name registry \
+     -v "$(pwd)"/certs:/certs \
+     -v "$(pwd)"/registry:/var/lib/registry \
+     -e REGISTRY_HTTP_TLS_CERTIFICATE=/certs/domain.crt \
+     -e REGISTRY_HTTP_TLS_KEY=/certs/domain.key \
+     registry:2
+   ```
+1. Add the myregistry.test record. Remember to change it to your private IP.
+   ```
+   # vim /etc/hosts
+   192.168.0.50 myregistry.test
+   ```
+1. Create `get-rancher-scripts` script.
+   ```
+   # vim get-rancher-scripts
+   #!/bin/bash
+   if [[ $# -eq 0 ]] ; then
+       echo 'This requires you to pass a version for the url like "v2.6.3"'
+       exit 1
+   fi
+   wget https://github.com/rancher/rancher/releases/download/$1/rancher-images.txt
+   wget https://github.com/rancher/rancher/releases/download/$1/rancher-load-images.sh
+   wget https://github.com/rancher/rancher/releases/download/$1/rancher-save-images.sh
+   chmod +x ./rancher-save-images.sh
+   chmod +x ./rancher-load-images.sh
+   ```
+1. Make `get-rancher-scripts` executable.
+   ```
+   chmod +x get-rancher-scripts
+   ```
+1. Get rancher-images.txt.
+   ```
+   ./get-rancher-scripts v2.6.4-rc13
+   ```
+1. Add cert-manager images to rancher-images.txt
+   ```
+   helm repo add jetstack https://charts.jetstack.io/
+   helm repo update
+   helm fetch jetstack/cert-manager --version v1.7.1
+   helm template ./cert-manager-v1.7.1.tgz | awk '$1 ~ /image:/ {print $2}' | sed s/\"//g >> ./rancher-images.txt
+   ```
+1. Sort rancher-images.txt
+   ```
+   sort -u rancher-images.txt -o rancher-images.txt
+   ```
+1. Get Rancher images. This step may take at least 2 hours depending on your network speed
+   ```
+   ./rancher-save-images.sh --image-list ./rancher-images.txt
+   ```
+   ![image](https://user-images.githubusercontent.com/29251855/160348531-f6f5dd3b-af01-43e2-81ae-ab5b47719d7f.png)
+
+#### Phase 4: Push the images to the docker registry
+1. Download k3s v1.23.4+k3s1
+   ```
+   wget https://github.com/k3s-io/k3s/releases/download/v1.23.4%2Bk3s1/k3s-airgap-images-amd64.tar
+   wget https://github.com/k3s-io/k3s/releases/download/v1.23.4%2Bk3s1/k3s
+   chmod +x k3s
+
+   sudo cp k3s /usr/local/bin/
+   sudo chown $USER /usr/local/bin/k3s
+
+   curl https://get.k3s.io/ -o install.sh
+   chmod +x install.sh
+   ```
+1. Download the rancher v2.6.4-rc13 helm chart
+   ```
+   helm repo add rancher-latest https://releases.rancher.com/server-charts/latest
+   helm fetch rancher-latest/rancher --version=v2.6.4-rc13
+   ```
+1. Download cert-manager CRDs.
+   ```
+   mkdir cert-manager
+   curl -L -o cert-manager/cert-manager-crd.yaml https://github.com/jetstack/cert-manager/releases/download/v1.7.1/cert-manager.crds.yaml
+   ```
+
+1. Install k9s (optional)
+   ```
+   curl -kL https://github.com/derailed/k9s/releases/download/v0.25.18/k9s_Linux_x86_64.tar.gz > k9s.tar.gz
+   tar -zxvf k9s.tar.gz
+   mv k9s /usr/local/bin/
+   ```
+
+1. Cut network -> Remove the `vagrant-libvirt` network device from the virtual machine to make it air-gapped
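+1. Optionally confirm the private registry still responds before loading images (a quick sanity check using the self-signed cert created earlier; `/v2/_catalog` is the standard registry API endpoint):
+   ```
+   curl --cacert certs/domain.crt https://myregistry.test:5000/v2/_catalog
+   ```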
+1. Load Rancher images to the private registry.
+   ```
+   ./rancher-load-images.sh --image-list ./rancher-images.txt --registry myregistry.test:5000
+   ```
+- This step may take 50 minutes depending on the total image size
+
+#### Phase 5: Install K3s and Rancher
+1. Move the k3s image files.
+   ```
+   sudo mkdir -p /var/lib/rancher/k3s/agent/images/
+   sudo cp ./k3s-airgap-images-amd64.tar /var/lib/rancher/k3s/agent/images/
+   ```
+1. Install k3s
+   ```
+   INSTALL_K3S_SKIP_DOWNLOAD=true ./install.sh
+   ```
+1. Add `registries.yaml` to `/etc/rancher/k3s/`
+   ```
+   # vim /etc/rancher/k3s/registries.yaml
+   mirrors:
+     docker.io:
+       endpoint:
+         - "https://myregistry.test:5000/"
+   configs:
+     "myregistry.test:5000":
+       tls:
+         insecure_skip_verify: true
+   ```
+1. Restart k3s
+   ```
+   sudo systemctl restart k3s.service
+   ```
+1. Copy kubeconfig to ~/.kube
+   ```
+   mkdir ~/.kube
+   sudo cp /etc/rancher/k3s/k3s.yaml ~/.kube/config
+   sudo chown $USER ~/.kube/config
+   export KUBECONFIG=~/.kube/config
+   ```
+1. Generate cert-manager YAML files.
+   ```
+   helm template cert-manager ./cert-manager-v1.7.1.tgz --output-dir . \
+     --namespace cert-manager \
+     --set image.repository=myregistry.test:5000/quay.io/jetstack/cert-manager-controller \
+     --set webhook.image.repository=myregistry.test:5000/quay.io/jetstack/cert-manager-webhook \
+     --set cainjector.image.repository=myregistry.test:5000/quay.io/jetstack/cert-manager-cainjector \
+     --set startupapicheck.image.repository=myregistry.test:5000/quay.io/jetstack/cert-manager-ctl
+   ```
+1. Install cert-manager.
+   ```
+   kubectl create namespace cert-manager
+   kubectl apply -f cert-manager/cert-manager-crd.yaml
+   kubectl apply -R -f ./cert-manager
+   ```
+1. Create CA private key and certificate file.
+   ```
+   openssl genrsa -out cakey.pem 2048
+   openssl req -x509 -sha256 -new -nodes -key cakey.pem -days 3650 -out cacerts.pem -subj "/CN=cattle-ca"
+   ```
+1. Create `openssl.cnf`. Remember to change `192.168.0.50` to your private IP.
+   ```
+   [req]
+   req_extensions = v3_req
+   distinguished_name = req_distinguished_name
+   [req_distinguished_name]
+   [ v3_req ]
+   basicConstraints = CA:FALSE
+   keyUsage = nonRepudiation, digitalSignature, keyEncipherment
+   extendedKeyUsage = clientAuth, serverAuth
+   subjectAltName = @alt_names
+   [alt_names]
+   DNS.1 = myrancher.local
+   IP.1 = 192.168.0.50
+   ```
+1. Generate private key and certificate file for `myrancher.local`.
+   ```
+   openssl genrsa -out tls.key 2048
+   openssl req -sha256 -new -key tls.key -out tls.csr -subj "/CN=myrancher.local" -config openssl.cnf
+   openssl x509 -sha256 -req -in tls.csr -CA cacerts.pem \
+     -CAkey cakey.pem -CAcreateserial -out tls.crt \
+     -days 3650 -extensions v3_req \
+     -extfile openssl.cnf
+   ```
+1. Create `cattle-system` namespace.
+   ```
+   kubectl create ns cattle-system
+   ```
+1. Create `tls-ca` secret.
+   ```
+   kubectl -n cattle-system create secret generic tls-ca \
+     --from-file=cacerts.pem=./cacerts.pem
+   ```
+1. Create `tls-rancher-ingress` secret.
+   ```
+   kubectl -n cattle-system create secret tls tls-rancher-ingress \
+     --cert=tls.crt \
+     --key=tls.key
+   ```
+1. Generate Rancher YAML files.
+   ```
+   helm template rancher ./rancher-2.6.4-rc13.tgz --output-dir . \
+     --no-hooks \
+     --namespace cattle-system \
+     --set hostname=helm-install.local \
+     --set rancherImageTag=v2.6.4-rc13 \
+     --set rancherImage=myregistry.test:5000/rancher/rancher \
+     --set systemDefaultRegistry=myregistry.test:5000 \
+     --set useBundledSystemChart=true \
+     --set ingress.tls.source=secret \
+     --set privateCA=true
+   ```
+1. Install Rancher
+   ```
+   kubectl -n cattle-system apply -R -f ./rancher
+
+   # example output:
+   clusterrolebinding.rbac.authorization.k8s.io/rancher created
+   deployment.apps/rancher created
+   ingress.networking.k8s.io/rancher created
+   service/rancher created
+   serviceaccount/rancher created
+   ```
+   ![image](https://user-images.githubusercontent.com/29251855/160362486-4e521a71-9b31-4460-a4a0-958653dbd56d.png)
+
+#### Phase 6: Edit Rancher VM hosts file and bootstrap Rancher
+1. Edit the `/etc/hosts` file in the Rancher virtual machine
+   ```
+   192.168.0.50 myregistry.test
+   ```
+
+1. Edit the `/etc/hosts` file on the host machine
+   ```
+   192.168.0.50 helm-install.local
+   ```
+   ![image](https://user-images.githubusercontent.com/29251855/160389038-40cf1803-4914-4dae-83a7-0a547872429f.png)
+
+1. Open a browser on your host machine and access the URL https://helm-install.local/
+   ![image](https://user-images.githubusercontent.com/29251855/160363090-3741cb41-a48e-4512-90ab-0ef2287e27ae.png)
+
+1. Copy the helm command and run it in the Rancher virtual machine terminal
+
+1. Get the password and complete the bootstrap process
+   ![image](https://user-images.githubusercontent.com/29251855/160363211-980d17a8-8441-45ee-9af2-f4d000ac9c06.png)
+
+#### Phase 7: Edit Harvester cluster hosts files
+1. Edit `/etc/hosts` on the Harvester node machine
+   ```
+   # vim /etc/hosts
+   192.168.0.50 myregistry.test helm-install.local
+   ```
+   ![image](https://user-images.githubusercontent.com/29251855/160390326-d89b5ece-2cab-48bc-a1ee-35fd7f8cc648.png)
+
+1. Set `registries.yaml` in Harvester at `/etc/rancher/rke2/registries.yaml`
+   ```
+   # vim /etc/rancher/rke2/registries.yaml
+   mirrors:
+     docker.io:
+       endpoint:
+         - "https://myregistry.test:5000/"
+   configs:
+     "myregistry.test:5000":
+       tls:
+         insecure_skip_verify: true
+   ```
+1. Restart Harvester
+   ```
+   systemctl restart rke2-server.service
+   ```
+
+#### Phase 8: Add host mapping to coredns configmap and deployments
+1. Open K9s -> enter `: configmap` -> search dns
+   ![image](https://user-images.githubusercontent.com/29251855/160373835-c4aa114e-d07d-4ec6-8ff7-ba742008bf9a.png)
+1. Edit the `rke2-coredns-rke2-coredns`
+1. Add the following content and save
+   ```
+   hosts /etc/coredns/customdomains.db helm-install.local {\n
+   \  fallthrough\n }
+   ```
+
+   ```
+   customdomains.db: |
+     192.168.0.50 helm-install.local
+   ```
+
+   ```
+   data:
+     Corefile: ".:53 {\n  errors \n  health {\n    lameduck 5s\n  }\n  ready \n  kubernetes cluster.local cluster.local in-addr.arpa ip6.arpa {\n    pods insecure\n    fallthrough in-addr.arpa ip6.arpa\n    ttl 30\n  }\n  prometheus 0.0.0.0:9153\n  hosts /etc/coredns/customdomains.db helm-install.local {\n    fallthrough\n  }\n  forward . /etc/resolv.conf\n  cache 30\n  loop \n  reload \n  loadbalance \n}"
+     customdomains.db: |
+       192.168.0.50 helm-install.local
+   ```
+
+1. Open K9s -> enter `: deployment` -> search dns
+1. Edit the rke2-coredns-rke2-coredns
+   ![image](https://user-images.githubusercontent.com/29251855/160374673-cadbf036-7a76-48c3-8029-306976f69ef0.png)
+1. Add the following content and save
+   ```
+   - key: customdomains.db
+     path: customdomains.db
+   ```
+   ![image](https://user-images.githubusercontent.com/29251855/160374909-e8364442-9401-468f-8c3a-5f7ce9c1443b.png)
+
+#### Phase 9: Import Harvester from Rancher UI
+1. Prepare the environment as in the steps above
+1. Open the Rancher Virtualization Management page and click import
+   ![image](https://user-images.githubusercontent.com/29251855/160376484-8d4503ab-32e3-4579-bb54-9f317763e795.png)
+
+1. Copy the registration URL and paste it into the Harvester cluster registration URL setting
+   ![image](https://user-images.githubusercontent.com/29251855/160376541-8ad0a58b-e118-4748-9e2d-f0a99eeaca2a.png)
+
+
+### Test steps
+
+1. (k3s with Rancher VM) Update coredns in `sudo vim /var/lib/rancher/k3s/server/manifests/coredns.yaml`.
+   ```
+   # update ConfigMap
+   Corefile: |
+     .:53 {
+       errors
+       health
+       ready
+       kubernetes cluster.local in-addr.arpa ip6.arpa {
+         pods insecure
+         fallthrough in-addr.arpa ip6.arpa
+       }
+       hosts /etc/coredns/customdomains.db {
+         fallthrough
+       }
+       prometheus :9153
+       forward . /etc/resolv.conf
+       cache 30
+       loop
+       reload
+       loadbalance
+     }
+   customdomains.db: |
+     192.168.0.50 airgap helm-install.local
+
+   # update deployment
+   # remove NodeHost key and path
+   # add customdomains.db
+   - key: customdomains.db
+     path: customdomains.db
+   ```
+1. Import `SLES15-SP3-JeOS.x86_64-15.3-OpenStack-Cloud-GM.qcow2` to Harvester.
+1. Create an RKE2 cluster with the following userData.
+   ```
+   runcmd:
+     - - systemctl
+       - enable
+       - --now
+       - qemu-guest-agent
+   bootcmd:
+     - echo 192.168.0.50 helm-install.local myregistry.test >> /etc/hosts
+   ```
+1. (RKE2 VM) Create a file in `/etc/rancher/agent/tmp_registries.yaml`:
+   ```
+   mirrors:
+     docker.io:
+       endpoint:
+         - "https://myregistry.test:5000"
+   configs:
+     "myregistry.test:5000":
+       tls:
+         insecure_skip_verify: true
+   ```
+1. (RKE2 VM) Update the rancher-system-agent config file `/etc/rancher/agent/config.yaml`.
+   ```
+   agentRegistriesFile: /etc/rancher/agent/tmp_registries.yaml
+   ```
+1. (RKE2 VM) Restart rancher-system-agent.
+   ```
+   systemctl restart rancher-system-agent.service
+   ```
+1. (RKE2 VM) Create a file in `/etc/rancher/rke2/registries.yaml`:
+   ```
+   mirrors:
+     docker.io:
+       endpoint:
+         - "https://myregistry.test:5000"
+   configs:
+     "myregistry.test:5000":
+       tls:
+         insecure_skip_verify: true
+   ```
+1. (RKE2 VM) Update ConfigMap `kube-system/rke2-coredns-rke2-coredns` in RKE2.
+   ```
+   data:
+     Corefile: ".:53 {\n  errors \n  health {\n    lameduck 5s\n  }\n  ready \n  kubernetes cluster.local cluster.local in-addr.arpa ip6.arpa {\n    pods insecure\n    fallthrough in-addr.arpa ip6.arpa\n    ttl 30\n  }\n  prometheus 0.0.0.0:9153\n  hosts /etc/coredns/customdomains.db helm-install.local {\n    fallthrough\n  }\n  forward . /etc/resolv.conf\n  cache 30\n  loop \n  reload \n  loadbalance \n}"
+     customdomains.db: |
+       192.168.0.50 helm-install.local
+   ```
+1. (RKE2 VM) Update Deployment `kube-system/rke2-coredns-rke2-coredns`.
+   ```
+   # add following to volumes[].configMap
+   - key: customdomains.db
+     path: customdomains.db
+   ```
+
+## Expected Results
+1. Can import Harvester from Rancher correctly
+1. Can access the downstream Harvester cluster from the Rancher dashboard
+1. Can provision an RKE2 cluster with at least one node to Harvester correctly, with running status
+1. Can explore provisioned RKE2 cluster nodes
+1. RKE2 cluster VMs are created and run correctly on the Harvester node
\ No newline at end of file
diff --git a/docs/content/manual/harvester-rancher/69-dhcp-loadbalancer-service-no-health-check.md b/docs/content/manual/harvester-rancher/69-dhcp-loadbalancer-service-no-health-check.md
new file mode 100644
index 000000000..608b126cf
--- /dev/null
+++ b/docs/content/manual/harvester-rancher/69-dhcp-loadbalancer-service-no-health-check.md
@@ -0,0 +1,36 @@
+---
+title: 69-DHCP Harvester LoadBalancer service no health check
+---
+Prerequisite:
+Already provisioned RKE1/RKE2 cluster in previous test case
+
+1. Open `Global Settings` in hamburger menu
+1. Replace `ui-dashboard-index` with `https://releases.rancher.com/harvester-ui/dashboard/latest/index.html`
+1. Change `ui-offline-preferred` to `Remote`
+1. Refresh the current page (ctrl + r)
+1. Open provisioned RKE2 cluster from hamburger menu
+1. Drop down `Service Discovery`
+1. Click `Services`
+1. Click Create
+1. Select `Load Balancer`
+
+![image.png](https://images.zenhubusercontent.com/61519853321ea20d65443929/f628094c-a195-4f99-9fb7-858d759dc019)
+
+1. Give a service name that makes the load balancer name, composed of the cluster name, namespace, service name, and an 8-character suffix, exceed 63 characters
+1. Provide Listening port and Target port
+
+![image.png](https://images.zenhubusercontent.com/61519853321ea20d65443929/2c20c759-4769-438b-94ad-5b995ba66873)
+
+1. Click `Add-on Config`
+1. Select Health Check port
+1. Select `dhcp` as IPAM mode
+
+
+1. Create another load balancer service whose generated name stays within the 63-character limit.
+
+## Expected Results
+- Can create the load balancer service correctly
+
+![image.png](https://images.zenhubusercontent.com/61519853321ea20d65443929/4fbf9271-e3fa-4490-b1e9-8bb9c20060bf)
+
+- Can operate and forward workload as expected
diff --git a/docs/content/manual/harvester-rancher/70-pool-loadbalancer-service-no-health-check.md b/docs/content/manual/harvester-rancher/70-pool-loadbalancer-service-no-health-check.md
new file mode 100644
index 000000000..b3e6f416b
--- /dev/null
+++ b/docs/content/manual/harvester-rancher/70-pool-loadbalancer-service-no-health-check.md
@@ -0,0 +1,32 @@
+---
+title: 70-Pool LoadBalancer service no health check
+---
+Prerequisite:
+Already provisioned RKE1/RKE2 cluster in previous test case
+
+1. Open `Global Settings` in hamburger menu
+1. Replace `ui-dashboard-index` with `https://releases.rancher.com/harvester-ui/dashboard/latest/index.html`
+1. Change `ui-offline-preferred` to `Remote`
+1. Refresh the current page (ctrl + r)
+1. Access Harvester dashboard UI
+1. Go to Settings
+1. Create a vip-pool in Harvester settings.
+   ![image](https://user-images.githubusercontent.com/29251855/158514040-bfcd9ff3-964a-4511-94d7-a497ef88848f.png)
+1. Open provisioned RKE2 cluster from hamburger menu
+1. Drop down `Service Discovery`
+1. Click `Services`
+1. Click Create
+1. Select `Load Balancer`
+![image.png](https://images.zenhubusercontent.com/61519853321ea20d65443929/f628094c-a195-4f99-9fb7-858d759dc019)
+
+1. Give a service name
+1. Provide Listening port and Target port
+   ![image.png](https://images.zenhubusercontent.com/61519853321ea20d65443929/2c20c759-4769-438b-94ad-5b995ba66873)
+
+1. Click `Add-on Config`
+1. Provide Health Check port
+1. Select `pool` as IPAM mode
+
+## Expected Results
+1. Can create the load balancer service correctly
+1. Can operate and route to deployed service correctly
\ No newline at end of file