Equinix Metal - Add ability to deploy servers with Layer 2/Hybrid networking #613
Comments
/kind feature |
Hi, I'd like to know what the next step will be in the scenario of an L2 interface with one or multiple VLANs. Consider the scenario where an instance is created and its networking is changed to L2 as proposed. Then you need to do two things: configure the VLAN subinterface inside the OS, and assign an IP address to that interface.
The first can be configured inside the OS. The second issue is that when creating the network config you need to pick an IP address for the interface (for it to be useful). This is not easy to automate, as the IP address is different for each server and, to do it properly, must be populated by some IPAM running in the background. The alternative is to have a DHCP server. This is not provided by Equinix services (AFAIK), which leaves spawning a "support" instance that runs DHCP as the only solution (fairly expensive just to have DHCP). So my question is: is there any plan to offer DHCP server functionality in Equinix Metal? Then I can imagine we would be able to do something meaningful without having to SSH to the server or use SOS, and have a fully automated experience. I'd like to hear more opinions or corrections of my view in case I missed anything. |
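To make the automation problem concrete, here is a minimal sketch, assuming netplan on the host, placeholder NIC names, and a hypothetical VLAN 1000, of the static config that would have to be rendered for every server. Everything is identical across machines except the address, which is exactly the value that needs IPAM or a DHCP server behind it:

```yaml
# Hypothetical per-node netplan config for a Layer 2 VLAN subinterface.
# NIC names, the bond, VLAN 1000 and the address are placeholders.
network:
  version: 2
  ethernets:
    enp1s0f0: {}
    enp1s0f1: {}
  bonds:
    bond0:
      interfaces: [enp1s0f0, enp1s0f1]
      parameters:
        mode: 802.3ad
  vlans:
    bond0.1000:
      id: 1000
      link: bond0
      addresses:
        - 192.168.100.11/24   # the only per-server value; needs IPAM or DHCP to populate
```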
Added an issue that will be relevant to this: |
Linking to a relevant request for Layer2 support in the Autoscaler project: While CAPP may not be directly related to that, any CAPP changes that depend on CPEM support for Layer2 (spec or annotations) should be consistent with the Autoscaler experience: a common set of Equinix Metal annotations that mean the same thing across kubernetes-sigs controllers, i.e. all consumers of the spec.ProviderID equinixmetal:// namespace. |
The Kubernetes project currently lacks enough contributors to adequately respond to all issues. This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue as fresh with /remove-lifecycle stale
- Close this issue with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community. /lifecycle stale |
/remove-lifecycle stale |
hi @displague as part of phase 1, here is what I was thinking before jumping on the design doc.
The CAPP operator, while provisioning the metal node, would run the below script as part of the user_data:
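The script itself is not preserved in this thread; purely as a sketch of what such a user_data payload could look like, assuming a cloud-init cloud-config, ifupdown on the host, and a hypothetical VLAN 1000 on bond0 with a placeholder address:

```yaml
#cloud-config
# Sketch only: creates a VLAN subinterface and persists it for reboots.
# Assumes /etc/network/interfaces sources interfaces.d/*; VLAN 1000, bond0
# and the address are placeholders, not values agreed in this issue.
write_files:
  - path: /etc/network/interfaces.d/bond0.1000
    content: |
      auto bond0.1000
      iface bond0.1000 inet static
          address 192.168.100.11
          netmask 255.255.255.0
          vlan-raw-device bond0
runcmd:
  - modprobe 8021q
  - apt-get update && apt-get install -y vlan
  - ifup bond0.1000
```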
(To make the networking configuration permanent and survive server reboots, we'll add the new subinterface to the /etc/network/interfaces file. This would be for the host OS; for containers / Kubernetes I think this needs more discussion.) I'd like to hear more opinions or corrections of my view in case I missed anything. |
@rahulii https://deploy.equinix.com/developers/docs/metal/layer2-networking/hybrid-unbonded-mode/, for example, would have its own considerations. While I don't think we need all of these modes to be fleshed out in CAPP's initial Layer2/Hybrid capabilities, it's worth considering how the configuration patterns in the initial release can stand up to known future intentions. |
#226 (comment)
what does |
@rahulii this is a good observation. You are right that (a) for each VLAN defined there should be some IP definitions. The initial intent was to capture (b) IP reservations, which may either be elastic public addresses or addresses from a VRF block. VRF definitions are an Equinix Metal networking feature newer than many of the recommendations in that comment thread. VRF addresses are VLAN scoped. Elastic IP addresses are Machine scoped and must be bound to eth0 or bond0 in layer3 or hybrid modes. In the absence of VRF, a Layer2 port may use any addresses. VRF adds forwarding and carve-out awareness to the routing infrastructure, which is not present without VRF.

I think, for that example snippet, this leads to:

```yaml
- type: vrf # statically assigned addresses, carved out of a VRF
  # vrf_id: optional. cidr_notation will be created within the designated VRF.
  # Conflicts (or must match) with cidr, vxlan, vlan_id, since these are inferred/known given a vrf_id.
  cidr_notation: "1.2.3.4/24" # the range to be used by the cluster
  cidr: 22 # the size of the VRF to create (from which cidr_notation will be reserved)
  vxlan: 1234 # VLAN to create (or perhaps look up and join?)
  # vlan_id: optional alternative uuid of an existing vlan. Conflicts with vxlan, or the target VLAN at this id must match vxlan.
```
|
@displague got it.
Now, CAPP needs to assign the port to VLAN 1234 and pick one IP address from the cidr_notation range (correct me if I am wrong). |
@rahulii correct. There is no state for IP assignments on the Equinix Metal API side. Management of which IP addresses are assigned to which nodes will need to be handled in CAPP or an IPAM provider. I may be approaching this naively, but a ConfigMap may be a way to persist and maintain those assignments. I don't know if that would create a conflict over ownership of the ConfigMap between CAPP and an IPAM provider. Alternatively, the spec and status of the Cluster resource could capture these details. IPAM providers may offer a better way to solve this that doesn't rely on ConfigMaps or on modifying the Cluster resource.

In terms of how IP addresses are assigned to either ports or VLANs, the Network Config format in Netplan serves as inspiration: https://netplan.readthedocs.io/en/latest/netplan-yaml/ |
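For illustration only (the name, namespace, and key layout below are hypothetical, not something CAPP defines today), such a ConfigMap tracking which addresses have been handed out to which Machines might look like:

```yaml
# Hypothetical ConfigMap for persisting Layer 2 IP assignments per Machine.
# The name, namespace, keys and addresses are all illustrative.
apiVersion: v1
kind: ConfigMap
metadata:
  name: my-cluster-layer2-ipam
  namespace: cluster-namespace
data:
  # key: Machine name, value: address assigned from the VRF/cidr_notation range
  my-cluster-control-plane-0: "1.2.3.10/24"
  my-cluster-worker-a1b2c: "1.2.3.11/24"
```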
@displague should we also give options to reference an existing Metal Gateway or create a new one, since VRF IPs are used with Metal Gateways? |
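To make the question concrete, one option (the gateway field names are hypothetical and for discussion only) would be to extend the vrf entry from the earlier snippet with a gateway reference:

```yaml
- type: vrf
  cidr_notation: "1.2.3.4/24"
  cidr: 22
  vxlan: 1234
  # hypothetical additions, for discussion only:
  metal_gateway_id: "ee1f1a2b-..."   # reuse an existing Metal Gateway (placeholder uuid)
  # create_metal_gateway: true       # or have CAPP create one for this VLAN/VRF range
```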
The Kubernetes project currently lacks enough contributors to adequately respond to all issues. This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue as fresh with /remove-lifecycle stale
- Close this issue with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community. /lifecycle stale |
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues. This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue as fresh with /remove-lifecycle rotten
- Close this issue with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community. /lifecycle rotten |
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs. This bot triages issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Reopen this issue with /reopen
- Mark this issue as fresh with /remove-lifecycle rotten
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community. /close not-planned |
@k8s-triage-robot: Closing this issue, marking it as "Not Planned". In response to this:
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository. |
User Story
As a user of Cluster API, it would be great to be able to launch instances on Equinix Metal with Layer 2/Hybrid networking and to be able to specify a custom set of private IPs for L2 interfaces.
Detailed Description
By default, all servers created on Equinix Metal via Cluster API have Layer 3 networking, and there is no option when provisioning an Equinix Metal cluster to specify the type of networking for instances or to create additional L2 interfaces with specific local IP addresses and VLANs.
It would be great to implement the ability to specify the following when provisioning an Equinix Metal cluster (a hypothetical sketch follows below):
- the networking type for instances (Layer 3, Hybrid, or Layer 2)
- the VLAN(s) to attach to L2 interfaces
- the private IP addresses to assign to those interfaces
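Purely as a sketch of the desired experience, and assuming CAPP's PacketMachineTemplate as the place where this would surface (the networkType and vlans fields below are hypothetical, not an existing API):

```yaml
# Hypothetical example only; the networkType and vlans fields do not exist in CAPP today.
apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
kind: PacketMachineTemplate
metadata:
  name: example-worker-template
spec:
  template:
    spec:
      os: ubuntu_20_04
      machineType: c3.small.x86
      networkType: hybrid              # hypothetical: layer3 (default), hybrid, or layer2
      vlans:                           # hypothetical: L2 attachments for this machine
        - vxlan: 1234
          address: 192.168.100.11/24   # private IP for the L2 subinterface
```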