Today, in CNCC we store the capacity values as integers:

type capacity struct {
	IPv4 int `json:"ipv4,omitempty"`
	IPv6 int `json:"ipv6,omitempty"`
	IP   int `json:"ip,omitempty"`
}
When capacity is full, CNCC sets the value to 0. Depending on the
platform, it also skips setting fields it doesn't care about (for
example, AWS doesn't use IP, and GCP and Azure don't use IPv4 and
IPv6). However, since we have omitempty set, the zero value was being
omitted from the annotation. When OVN-Kubernetes read the annotation,
it then fell back to setting the capacity to unlimited:
nodeEgressIPConfig := []nodeEgressIPConfiguration{
	{
		Capacity: Capacity{
			IP:   UnlimitedNodeCapacity,
			IPv4: UnlimitedNodeCapacity, // UnlimitedNodeCapacity is set to math.MaxInt32
			IPv6: UnlimitedNodeCapacity,
		},
	},
}
which was causing all EgressIPs to be assigned to this node, leading to:
status:
  conditions:
  - lastTransitionTime: "2025-10-06T19:24:24Z"
    message: "Error processing cloud assignment request, err: PrivateIpAddressLimitExceeded:
      Number of private addresses will exceed limit.\n\tstatus code: 400, request
      id: 457f4332-e9c4-44c9-bfcf-deeb5e7e43ce"
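For reference, a minimal sketch of why the zero disappears from the
annotation in the first place (this is just standard encoding/json
behaviour for omitempty on int fields, using the struct shown above):

package main

import (
	"encoding/json"
	"fmt"
)

type capacity struct {
	IPv4 int `json:"ipv4,omitempty"`
	IPv6 int `json:"ipv6,omitempty"`
	IP   int `json:"ip,omitempty"`
}

func main() {
	// A node whose IPv4/IPv6 capacity is exhausted: CNCC writes 0 for both.
	// IP is left untouched on AWS, so it is also at its zero value.
	full := capacity{IPv4: 0, IPv6: 0}
	b, _ := json.Marshal(full)
	// omitempty drops every zero-valued int, so "exhausted" and "unset"
	// serialize identically: {}
	fmt.Println(string(b))
}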
In this fix, what we really want is to remove omitempty so that a zero
capacity gets reflected correctly. However, doing so also means fields
that are unset would be serialized as zero, which is confusing: we
would no longer be able to distinguish an unset field from an explicit
0. Hence we are changing the capacity struct to pointer values, so
that null/nil means unset and 0 means full capacity. We still keep
omitempty since we don't need to do anything with unset fields - there
is no behaviour change there, and OVN-Kubernetes will continue to
treat them as unlimited capacity.
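A minimal sketch of the resulting struct (same field names and json
tags as before; only the types change to pointers):

type capacity struct {
	IPv4 *int `json:"ipv4,omitempty"`
	IPv6 *int `json:"ipv6,omitempty"`
	IP   *int `json:"ip,omitempty"`
}

With pointers, omitempty only drops nil fields: a pointer to 0 is
still serialized as "ipv4":0, while an unset (nil) field is omitted
exactly as before.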
Upgrades: upon restart, CNCC seems to call:
func (n *NodeController) SyncHandler(key string) error {
	....
	// Filter out cloudPrivateIPConfigs assigned to node (key) and write the entry
	// into same slice starting from index 0, finally chop off unwanted entries
	// when passing it into GetNodeEgressIPConfiguration.
	index := 0
	for _, cloudPrivateIPConfig := range cloudPrivateIPConfigs {
		if isAssignedCloudPrivateIPConfigOnNode(cloudPrivateIPConfig, key) {
			cloudPrivateIPConfigs[index] = cloudPrivateIPConfig
			index++
		}
	}
	nodeEgressIPConfigs, err := n.cloudProviderClient.GetNodeEgressIPConfiguration(node, cloudPrivateIPConfigs[:index])
	if err != nil {
		return fmt.Errorf("error retrieving the private IP configuration for node: %s, err: %v", node.Name, err)
	}
	return n.SetNodeEgressIPConfigAnnotation(node, nodeEgressIPConfigs)
}
// SetCloudPrivateIPConfigAnnotationOnNode annotates the corev1.Node with the cloud subnet information and capacity
func (n *NodeController) SetNodeEgressIPConfigAnnotation(node *corev1.Node, nodeEgressIPConfigs []*cloudprovider.NodeEgressIPConfiguration) error {
	annotation, err := n.generateAnnotation(nodeEgressIPConfigs)
	if err != nil {
		return err
	}
	klog.Infof("Setting annotation: '%s: %s' on node: %s", nodeEgressIPConfigAnnotationKey, annotation, node.Name)
	return retry.RetryOnConflict(retry.DefaultRetry, func() error {
		ctx, cancel := context.WithTimeout(n.ctx, controller.ClientTimeout)
		defer cancel()
		// See: updateCloudPrivateIPConfigStatus
		nodeLatest, err := n.kubeClient.CoreV1().Nodes().Get(ctx, node.Name, metav1.GetOptions{})
		if err != nil {
			return err
		}
		existingAnnotations := nodeLatest.Annotations
		existingAnnotations[nodeEgressIPConfigAnnotationKey] = annotation
		nodeLatest.SetAnnotations(existingAnnotations)
		_, err = n.kubeClient.CoreV1().Nodes().Update(ctx, nodeLatest, metav1.UpdateOptions{})
		return err
	})
}
and we seem to be overwriting the annotation, so we should be good on
upgrades when moving from the older annotations to the new ones, where
0-valued fields will now appear for full-capacity nodes.
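To make that concrete, here is roughly how the capacity portion of the
generated annotation value changes for a full-capacity AWS node (only
the capacity object is shown; the rest of the annotation is
unchanged):

old (ints + omitempty):      "capacity":{}
new (pointers + omitempty):  "capacity":{"ipv4":0,"ipv6":0}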
Once that happens, OVN-Kubernetes should overwrite its unlimited
default with the 0 value that indicates zero capacity, and we should
enter:
if eNode.egressIPConfig.Capacity.IP < util.UnlimitedNodeCapacity {
	if eNode.egressIPConfig.Capacity.IP-len(eNode.allocations) <= 0 {
		klog.V(5).Infof("Additional allocation on Node: %s exhausts it's IP capacity, trying another node", eNode.name)
		continue
	}
}
if eNode.egressIPConfig.Capacity.IPv4 < util.UnlimitedNodeCapacity && utilnet.IsIPv4(eIP) {
	if eNode.egressIPConfig.Capacity.IPv4-getIPFamilyAllocationCount(eNode.allocations, false) <= 0 {
		klog.V(5).Infof("Additional allocation on Node: %s exhausts it's IPv4 capacity, trying another node", eNode.name)
		continue
	}
}
if eNode.egressIPConfig.Capacity.IPv6 < util.UnlimitedNodeCapacity && utilnet.IsIPv6(eIP) {
	if eNode.egressIPConfig.Capacity.IPv6-getIPFamilyAllocationCount(eNode.allocations, true) <= 0 {
		klog.V(5).Infof("Additional allocation on Node: %s exhausts it's IPv6 capacity, trying another node", eNode.name)
		continue
	}
}
these desired conditions correctly, skipping nodes whose capacity is
exhausted.
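A minimal sketch of why an explicit 0 now takes effect while absent
fields stay unlimited (assuming OVN-Kubernetes keeps initialising the
config to UnlimitedNodeCapacity, as in the snippet quoted earlier,
before unmarshaling the annotation; the struct and constant below are
illustrative stand-ins):

package main

import (
	"encoding/json"
	"fmt"
	"math"
)

const UnlimitedNodeCapacity = math.MaxInt32

type Capacity struct {
	IP   int `json:"ip,omitempty"`
	IPv4 int `json:"ipv4,omitempty"`
	IPv6 int `json:"ipv6,omitempty"`
}

func main() {
	// Defaults are unlimited, as in the OVN-Kubernetes snippet above.
	cfg := Capacity{IP: UnlimitedNodeCapacity, IPv4: UnlimitedNodeCapacity, IPv6: UnlimitedNodeCapacity}
	// New-style annotation for a full-capacity AWS node: explicit zeroes
	// for ipv4/ipv6, no "ip" key at all.
	_ = json.Unmarshal([]byte(`{"ipv4":0,"ipv6":0}`), &cfg)
	// Unmarshal only touches keys present in the JSON, so:
	//   cfg.IPv4 == 0, cfg.IPv6 == 0  -> the capacity checks above now trigger
	//   cfg.IP   == UnlimitedNodeCapacity (unchanged, still treated as unlimited)
	fmt.Println(cfg.IP, cfg.IPv4, cfg.IPv6)
}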
Conflicts in pkg/cloudprovider/azure.go
Signed-off-by: Surya Seetharaman <[email protected]>
(cherry picked from commit 66c4f5d)