Commit f703b29 (parent: c298bb8)

openyurt application delivery feature

Signed-off-by: huiwq1990 <[email protected]>

3 files changed

+539
-0
lines changed
---
title: Thoughts on OpenYurt Application Deployment
authors:
- "@huiwq1990"
reviewers:
- "@rambohe-ch"
creation-date: 2022-06-11
last-updated: 2022-06-11
status: provisional
---

# Thoughts on OpenYurt Application Deployment

## Deployment Scenarios

1) yurt-app-manager needs to deploy an ingress-controller instance into every nodepool.

2) yurt-edgex-manager needs to deploy an edgex instance into every nodepool.

## Current Approach

Define the edgex and yurtingress CRDs, implement a dedicated controller for each, and let each controller reconcile the creation of its resources.

## Current Problems

1) The edgex controller and the ingress controller overlap in function: both need to deploy instances into every nodepool.

2) Extensibility is limited, so additional resource types cannot be supported. For example, when edge gateways or upper-layer business workloads need to be deployed in the future, a new purpose-built controller has to be developed each time.

## Analysis

1) If the service to deploy is just a single image, `yurtdaemonset` can be used directly.
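For this single-image case, a minimal sketch of a per-nodepool workload is shown below, assuming the YurtAppDaemon API shipped by yurt-app-manager (`apps.openyurt.io/v1alpha1`); field names follow the upstream examples but are not verified against a specific release:

```yaml
# Sketch (assumed YurtAppDaemon API): stamps out one Deployment per
# nodepool that matches nodepoolSelector. Field names approximate.
apiVersion: apps.openyurt.io/v1alpha1
kind: YurtAppDaemon
metadata:
  name: nginx-per-pool
  namespace: default
spec:
  selector:
    matchLabels:
      app: nginx
  workloadTemplate:
    deploymentTemplate:
      metadata:
        labels:
          app: nginx
      spec:
        replicas: 1
        selector:
          matchLabels:
            app: nginx
        template:
          metadata:
            labels:
              app: nginx
          spec:
            containers:
            - name: nginx
              image: nginx:1.19
  nodepoolSelector:
    matchLabels:
      # hypothetical label used here to select the target nodepools
      apps.openyurt.io/example-nodepool: "true"
```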

2) If the service consists of multiple resources, they can be packaged as a Helm chart: a chart is templated and configurable, and convenient to deploy. Chart deployment can be handled by a CD system such as fluxcd or argocd. (Note: fluxcd's HelmRelease essentially runs `helm install`; it does not solve the nodepool problem.)

The chart is specified via `spec.chart`, and per-instance values are set via `spec.values`:

```yaml
# https://fluxcd.io/docs/components/helm/helmreleases/
apiVersion: helm.toolkit.fluxcd.io/v2beta1
kind: HelmRelease
metadata:
  name: backend
  namespace: default
spec:
  interval: 5m
  chart:
    spec:
      chart: podinfo
      version: ">=4.0.0 <5.0.0"
      sourceRef:
        kind: HelmRepository
        name: podinfo
        namespace: default
      interval: 1m
  upgrade:
    remediation:
      remediateLastFailure: true
  test:
    enable: true
  values:
    service:
      grpcService: backend
    resources:
      requests:
        cpu: 100m
        memory: 64Mi
```

3) Going one step further, a set of resources can be treated as one application. The community already has the OAM model for this, with kubevela as an implementation.

- kubevela can treat a chart as an application component and uses fluxcd underneath to deploy it;
- kubevela's topology policy supports multi-cluster deployment.

https://kubevela.io/docs/tutorials/helm-multi-cluster

```yaml
apiVersion: core.oam.dev/v1beta1
kind: Application
metadata:
  name: helm-hello
spec:
  components:
  - name: hello
    type: helm
    properties:
      repoType: "helm"
      url: "https://jhidalgo3.github.io/helm-charts/"
      chart: "hello-kubernetes-chart"
      version: "3.0.0"
  policies:
  - name: topology-local
    type: topology
    properties:
      clusters: ["local"]
  - name: topology-foo
    type: topology
    properties:
      clusters: ["foo"]
  - name: override-local
    type: override
    properties:
      components:
      - name: hello
        properties:
          values:
            configs:
              MESSAGE: Welcome to Control Plane Cluster!
  - name: override-foo
    type: override
    properties:
      components:
      - name: hello
        properties:
          values:
            configs:
              MESSAGE: Welcome to Your New Foo Cluster!
  workflow:
    steps:
    - name: deploy2local
      type: deploy
      properties:
        policies: ["topology-local", "override-local"]
    - name: manual-approval
      type: suspend
    - name: deploy2foo
      type: deploy
      properties:
        policies: ["topology-foo", "override-foo"]
```

## Conclusion

Based on the implementations above, deploying an application to multiple nodepools is analogous to deploying an application to multiple clusters. OpenYurt can therefore draw on kubevela's multi-cluster model to abstract its own nodepool application deployment model.

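To make the analogy concrete, a nodepool flavor of kubevela's topology policy might look like the sketch below. This is purely illustrative: the `nodepools` property is invented here for this proposal and does not exist in kubevela or openyurt today.

```yaml
# Purely illustrative: a topology policy scoped to nodepools instead of
# clusters. The `nodepools` field is invented; no such API exists yet.
apiVersion: core.oam.dev/v1beta1
kind: Application
metadata:
  name: edgex-per-pool
spec:
  components:
  - name: edgex
    type: helm
    properties:
      repoType: "helm"
      url: "https://example.com/charts"   # placeholder chart repository
      chart: "edgex"
  policies:
  - name: topology-nodepools
    type: topology
    properties:
      nodepools: ["hangzhou", "beijing"]  # invented field: target nodepools
```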
## Implementation Steps

1) Package the edgex and ingresscontroller resources as Helm charts; in addition, after `helm install`, the resources of different instances must not collide with (duplicate) each other.

2) Either develop a nodepool-aware Application feature on top of kubevela, or have openyurt implement its own equivalent controller.

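For step 1, the standard Helm practice of prefixing resource names with the release name avoids such collisions when the same chart is installed once per nodepool; a minimal template sketch (chart structure assumed):

```yaml
# templates/deployment.yaml (sketch): deriving the resource name from
# .Release.Name makes each per-nodepool install produce distinct names.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ .Release.Name }}-edgex-core   # e.g. "pool-hangzhou-edgex-core"
```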
## Drawbacks

1) Because the deployed resources are not watched, the controller can hardly notice when one of them is updated or deleted.
