## Background

Currently, Kmesh implements L4 and L7 traffic governance through the xDS protocol. However, in some scenarios microservice applications focus mainly on L4 traffic governance, with L7 governance deployed only as needed. The Istio community has introduced the Workload model to provide lightweight L4 traffic governance, which Kmesh needs to support.

Complete Workload model reference: https://pkg.go.dev/istio.io/istio/pkg/workloadapi

## Workload fields related to L4 traffic governance

### Address

```go
type Address struct {
	// Types that are assignable to Type:
	//	*Address_Workload
	//	*Address_Service
	Type isAddress_Type `protobuf_oneof:"type"`
}
```

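The oneof is handled with a type switch. The sketch below uses minimal local stand-ins for the generated types (the real definitions live in istio.io/istio/pkg/workloadapi); it only illustrates how a subscriber distinguishes the two sub-resource kinds.

```go
package main

import "fmt"

// Minimal local mirrors of the generated oneof types, for illustration only.
type isAddress_Type interface{ isAddressType() }

type Workload struct{ Name string }
type Service struct{ Hostname string }

type Address_Workload struct{ Workload *Workload }
type Address_Service struct{ Service *Service }

func (*Address_Workload) isAddressType() {}
func (*Address_Service) isAddressType()  {}

type Address struct{ Type isAddress_Type }

// describe dispatches on the oneof, the same way a subscriber must
// distinguish Workload and Service resources in each Address.
func describe(a *Address) string {
	switch t := a.Type.(type) {
	case *Address_Workload:
		return "workload: " + t.Workload.Name
	case *Address_Service:
		return "service: " + t.Service.Hostname
	default:
		return "unknown"
	}
}

func main() {
	svc := &Address{Type: &Address_Service{Service: &Service{Hostname: "fortio-server.default.svc.cluster.local"}}}
	fmt.Println(describe(svc))
}
```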
### Service

```go
type Service struct {
	// The service name for Kubernetes, such as: "fortio-server", "Kubernetes", "istiod", "kube-dns", etc.
	Name string `protobuf:"bytes,1,opt,name=name,proto3" json:"name,omitempty"`
	// The namespace the service belongs to, such as: "default", "kube-system", "istio-system", etc.
	Namespace string `protobuf:"bytes,2,opt,name=namespace,proto3" json:"namespace,omitempty"`
	// Hostname represents the FQDN of the service.
	// For Kubernetes, this would be <name>.<namespace>.svc.<cluster domain>, such as:
	// "fortio-server.default.svc.cluster.local", "istiod.istio-system.svc.cluster.local", etc.
	Hostname string `protobuf:"bytes,3,opt,name=hostname,proto3" json:"hostname,omitempty"`
	// Address represents the addresses the service can be reached at.
	// There may be multiple addresses for a single service if it resides in multiple networks,
	// multiple clusters, and/or if it's dual stack (TODO: support dual stack).
	// For a headless kubernetes service, this list will be empty.
	Addresses []*NetworkAddress `protobuf:"bytes,4,rep,name=addresses,proto3" json:"addresses,omitempty"`
	// Ports for the service.
	// The target_port may be overridden on a per-workload basis.
	Ports []*Port `protobuf:"bytes,5,rep,name=ports,proto3" json:"ports,omitempty"`
}
```

### Workload

```go
type Workload struct {
	// UID represents a globally unique opaque identifier for this workload, such as:
	// "Kubernetes//Pod/default/fortio-server-deployment-59f95d774d-85nr4"
	Uid string `protobuf:"bytes,20,opt,name=uid,proto3" json:"uid,omitempty"`
	// Name represents the name for the workload. For Kubernetes, this is the pod name, such as:
	// "fortio-server-deployment-59f95d774d-ljmd5"
	Name string `protobuf:"bytes,1,opt,name=name,proto3" json:"name,omitempty"`
	// Namespace represents the namespace for the workload.
	Namespace string `protobuf:"bytes,2,opt,name=namespace,proto3" json:"namespace,omitempty"`
	// Address represents the IPv4/IPv6 address for the workload; this should be globally unique.
	Addresses [][]byte `protobuf:"bytes,3,rep,name=addresses,proto3" json:"addresses,omitempty"`
	// The services for which this workload is an endpoint. The key is the NamespacedHostname
	// string of the format namespace/hostname.
	Services map[string]*PortList `protobuf:"bytes,22,rep,name=services,proto3" json:"services,omitempty" protobuf_key:"bytes,1,opt,name=key,proto3" protobuf_val:"bytes,2,opt,name=value,proto3"`
	// Health status of the workload, such as: "Healthy", "Unhealthy"
	Status WorkloadStatus `protobuf:"varint,17,opt,name=status,proto3,enum=istio.workload.WorkloadStatus" json:"status,omitempty"`
}
```

Note: The workload model configuration above relates to basic network functionality only and does not include TLS or encryption-related configuration. That content will be added when the complete workload functionality is implemented.

## How Kmesh subscribes to and uses the workload model

1. Kmesh's xDS client subscribes to the workload model from Istiod in Delta mode, with type_url "type.googleapis.com/istio.workload.Address";
2. The workload data of the Address type is divided into two sub-resources, Service and Workload, which are returned to Kmesh in responses, then parsed, converted into internal structures, and stored in BPF maps.

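The parse-and-store step can be sketched in userspace Go. `Cache`, `ServiceEntry`, and `WorkloadEntry` below are illustrative names, not actual Kmesh structures; real code would flush these staged entries into the BPF maps.

```go
package main

import "fmt"

// Illustrative internal structures only; names and fields are assumptions,
// not the actual Kmesh implementation.
type ServiceEntry struct {
	Hostname  string
	Namespace string
}

type WorkloadEntry struct {
	Uid      string
	Services []string // "namespace/hostname" keys of the services this workload backs
}

// Cache mimics a userspace staging area that is later flushed to BPF maps.
type Cache struct {
	services  map[string]ServiceEntry  // keyed by "namespace/hostname"
	workloads map[string]WorkloadEntry // keyed by workload UID
}

func NewCache() *Cache {
	return &Cache{services: map[string]ServiceEntry{}, workloads: map[string]WorkloadEntry{}}
}

// HandleService and HandleWorkload model step 2: each Address sub-resource
// is converted and stored, keyed the same way the BPF maps are keyed.
func (c *Cache) HandleService(s ServiceEntry) {
	c.services[s.Namespace+"/"+s.Hostname] = s
}

func (c *Cache) HandleWorkload(w WorkloadEntry) {
	c.workloads[w.Uid] = w
}

func main() {
	c := NewCache()
	c.HandleService(ServiceEntry{Hostname: "fortio-server.default.svc.cluster.local", Namespace: "default"})
	c.HandleWorkload(WorkloadEntry{
		Uid:      "Kubernetes//Pod/default/fortio-server-deployment-59f95d774d-85nr4",
		Services: []string{"default/fortio-server.default.svc.cluster.local"},
	})
	fmt.Println(len(c.services), len(c.workloads))
}
```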
In subsequent traffic governance, Kmesh finds the corresponding service and its endpoints in the BPF maps based on the IP and port the client accesses, then randomly selects one endpoint and forwards the request to it.

## Kmesh BPF map data structure definition

```C
// frontend map
typedef struct
{
    __be32 ipv4;         // service ip
    __be16 service_port;
} __attribute__((packed)) frontend_key;

typedef struct
{
    __u32 service_id;    // service id, converted from the <namespace>/<hostname> string to a uint32 variable
} __attribute__((packed)) frontend_value;

// service map
typedef struct
{
    __u32 service_id;    // service id, converted from the <namespace>/<hostname> string to a uint32 variable
} __attribute__((packed)) service_key;

typedef struct
{
    __u32 endpoint_count; // the endpoint count of the service
    __u32 lb_policy;      // currently only the random lb policy is supported
} __attribute__((packed)) service_value;

// endpoint map
typedef struct
{
    __u32 service_id;    // service id, converted from the <namespace>/<hostname> string to a uint32 variable
    __u32 backend_index; // backend index; its relationship to endpoint_count: if endpoint_count is 3, then backend_index can be 1/2/3
} __attribute__((packed)) endpoint_key;

typedef struct
{
    __u32 backend_uid;   // backend uid, converted from the workload_uid string to a uint32 variable
} __attribute__((packed)) endpoint_value;

// backend map
typedef struct
{
    __u32 backend_uid;   // backend uid, converted from the workload_uid string to a uint32 variable
} __attribute__((packed)) backend_key;

typedef struct
{
    __be32 ipv4;         // backend ip
    __u32 port_count;
    __u32 service_port[MAX_COUNT]; // MAX_COUNT is fixed at 10, currently
    __u32 target_port[MAX_COUNT];
} __attribute__((packed)) backend_value;
```

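The keys above require converting the `<namespace>/<hostname>` string (and similarly the workload UID) to a uint32. The design only requires a deterministic string-to-uint32 mapping; the FNV-32a hash below is one possible choice, assumed here purely for illustration.

```go
package main

import (
	"fmt"
	"hash/fnv"
)

// serviceID converts "namespace/hostname" into a uint32 map key using FNV-32a.
// The hash choice is an assumption, not the actual Kmesh conversion.
func serviceID(namespace, hostname string) uint32 {
	h := fnv.New32a()
	h.Write([]byte(namespace + "/" + hostname))
	return h.Sum32()
}

func main() {
	a := serviceID("default", "fortio-server.default.svc.cluster.local")
	b := serviceID("istio-system", "istiod.istio-system.svc.cluster.local")
	// Different services map to different ids; the same service always
	// maps to the same id.
	fmt.Println(a != b, a == serviceID("default", "fortio-server.default.svc.cluster.local"))
}
```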
## Subscription data processing flow

## Traffic governance process

* Client accesses a service: search the frontend map based on the IP and port the client accesses to find the corresponding service_id; then search the service map by service_id to get the endpoint_count of the service's backend pods; then search the endpoint map by service_id and a random backend_index generated from that count to find the corresponding backend_uid; finally, use the backend_uid to find the backend's IP and port.
* Client accesses a pod: access directly through the pod's IP and port.

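The four-map lookup chain can be mirrored in userspace Go for illustration. Keys are simplified (IPs as plain uint32, a fixed backend_index instead of a random pick, sample ids and ports made up), and none of this is the actual BPF code.

```go
package main

import "fmt"

// Userspace mirrors of the BPF map keys/values defined earlier,
// with sample data for one service backed by three endpoints.
type frontendKey struct {
	ip   uint32
	port uint16
}

type endpointKey struct {
	serviceID    uint32
	backendIndex uint32
}

type backend struct {
	ip         uint32
	targetPort uint16
}

var (
	frontendMap = map[frontendKey]uint32{{ip: 0x0A000001, port: 80}: 1} // (service ip, port) -> service_id
	serviceMap  = map[uint32]uint32{1: 3}                               // service_id -> endpoint_count
	endpointMap = map[endpointKey]uint32{                               // (service_id, backend_index) -> backend_uid
		{1, 1}: 41, {1, 2}: 42, {1, 3}: 43,
	}
	backendMap = map[uint32]backend{ // backend_uid -> backend address
		41: {0x0A00000A, 8080}, 42: {0x0A00000B, 8080}, 43: {0x0A00000C, 8080},
	}
)

// resolve walks frontend -> service -> endpoint -> backend for a given
// backend_index; the real datapath would pick the index randomly in
// [1, endpoint_count].
func resolve(ip uint32, port uint16, backendIndex uint32) (backend, bool) {
	sid, ok := frontendMap[frontendKey{ip: ip, port: port}]
	if !ok {
		return backend{}, false
	}
	count := serviceMap[sid]
	if backendIndex < 1 || backendIndex > count {
		return backend{}, false
	}
	uid, ok := endpointMap[endpointKey{serviceID: sid, backendIndex: backendIndex}]
	if !ok {
		return backend{}, false
	}
	be, ok := backendMap[uid]
	return be, ok
}

func main() {
	be, ok := resolve(0x0A000001, 80, 2)
	fmt.Println(ok, be.targetPort)
}
```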