[BUG] Private cluster in spoke VNET with custom DNS in hub VNET tries to join the private DNS zone, linked to hub VNET, to the spoke one #4841

Open
xi4n opened this issue Mar 6, 2025 · 1 comment

xi4n commented Mar 6, 2025

Describe the bug
Suppose I have a classic hub-and-spoke network topology and want to create an AKS private cluster in a subnet of the spoke VNET s. I use a custom DNS server in the hub VNET h and, following Azure best practices, the private DNS zones are linked only to the hub VNET h. This includes the privatelink.<my_region>.azmk8s.io zone, which already exists prior to the creation of the cluster. The spoke VNET s is configured to use the custom DNS server in h to resolve DNS requests.
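
For concreteness, a minimal sketch of pointing the spoke VNET at the custom DNS server in the hub; the names, address range, and DNS server IP below are hypothetical placeholders, not taken from my actual setup:

```hcl
# Hypothetical spoke VNET s, resolving DNS via the custom DNS server in the hub VNET h.
resource "azurerm_virtual_network" "spoke" {
  name                = "vnet-spoke-s"
  location            = var.location
  resource_group_name = var.resource_group_name
  address_space       = ["10.1.0.0/16"]

  # IP of the custom DNS server running in the hub VNET h (placeholder address).
  dns_servers = ["10.0.0.4"]
}
```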

Now I create the private cluster, providing it the ID of the privatelink.<my_region>.azmk8s.io private DNS zone and granting its identity the Private DNS Zone Contributor role on that zone and the Contributor role on the node pool subnets (not on the whole VNET s), all of which reside in the VNET s. Cluster creation then fails with an error. I create the cluster with Terraform, but I don't believe the error is related to Terraform itself.
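
For illustration, the role assignments described above could look roughly like this in Terraform; the identity and subnet references are hypothetical placeholders:

```hcl
# Cluster identity may manage records in the existing private DNS zone.
resource "azurerm_role_assignment" "aks_private_dns_zone" {
  scope                = var.private_dns_zone_id # privatelink.<my_region>.azmk8s.io
  role_definition_name = "Private DNS Zone Contributor"
  principal_id         = azurerm_user_assigned_identity.cluster.principal_id
}

# Contributor only on the node pool subnet in the spoke VNET s, not on the whole VNET.
resource "azurerm_role_assignment" "aks_node_subnet" {
  scope                = var.node_pool_subnet_id
  role_definition_name = "Contributor"
  principal_id         = azurerm_user_assigned_identity.cluster.principal_id
}
```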

Expected behavior
The private cluster should accept the hub VNET as the single source of DNS resolution when it creates the private endpoint in the spoke VNET, because that is what Azure recommends.

Current behavior
The private cluster tries to link the private DNS zone I gave it to the spoke VNET if it is not already linked. If it does not have sufficient permissions (because I only granted permissions at the subnet level in s, not at the VNET level), it throws an error.

A workaround
I could link the private DNS zone to both my hub and spoke VNETs, which would avoid the error during creation, but this is not how it should be, because the spoke VNET link will never be used. A sketch of this workaround is shown below.
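
A rough sketch of that workaround, with hypothetical resource names; the spoke link in the second resource is the one that should not be needed:

```hcl
# Link the existing zone to the hub (used for resolution) and to the spoke (never used).
resource "azurerm_private_dns_zone_virtual_network_link" "hub" {
  name                  = "link-hub"
  resource_group_name   = var.dns_zone_resource_group_name
  private_dns_zone_name = "privatelink.<my_region>.azmk8s.io"
  virtual_network_id    = azurerm_virtual_network.hub.id
}

resource "azurerm_private_dns_zone_virtual_network_link" "spoke" {
  name                  = "link-spoke"
  resource_group_name   = var.dns_zone_resource_group_name
  private_dns_zone_name = "privatelink.<my_region>.azmk8s.io"
  virtual_network_id    = azurerm_virtual_network.spoke.id
}
```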

Did I use any preview features?
Some AKS preview features are enabled in the subscription of the spoke VNET. However, I was not using any preview features related to this bug; in particular, I was not using API server VNET integration, and AKS was supposed to create a private endpoint for me to access the control plane.

To Reproduce
Use the most recent Terraform azurerm_kubernetes_cluster resource with two user-assigned managed identities, one for the cluster and one for kubelet; the network is Azure CNI overlay + Cilium. You need to provide:

dns_prefix_private_cluster = "something-you-like"
private_cluster_enabled = true
private_dns_zone_id = var.private_dns_zone_id # id of the private DNS zone `privatelink.<my_region>.azmk8s.io`
private_cluster_public_fqdn_enabled = false
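
Beyond those four settings, a minimal self-contained sketch of the cluster definition could look as follows; resource names, VM size, and node count are hypothetical placeholders:

```hcl
resource "azurerm_user_assigned_identity" "cluster" {
  name                = "uami-aks-cluster"
  location            = var.location
  resource_group_name = var.resource_group_name
}

resource "azurerm_user_assigned_identity" "kubelet" {
  name                = "uami-aks-kubelet"
  location            = var.location
  resource_group_name = var.resource_group_name
}

resource "azurerm_kubernetes_cluster" "this" {
  name                = "aks-private-spoke"
  location            = var.location
  resource_group_name = var.resource_group_name

  # Private cluster settings listed above.
  dns_prefix_private_cluster          = "something-you-like"
  private_cluster_enabled             = true
  private_dns_zone_id                 = var.private_dns_zone_id # privatelink.<my_region>.azmk8s.io
  private_cluster_public_fqdn_enabled = false

  # Two user-assigned managed identities: one for the cluster, one for kubelet.
  identity {
    type         = "UserAssigned"
    identity_ids = [azurerm_user_assigned_identity.cluster.id]
  }

  # Note: the cluster identity also needs the Managed Identity Operator role
  # on the kubelet identity for this to be accepted.
  kubelet_identity {
    client_id                 = azurerm_user_assigned_identity.kubelet.client_id
    object_id                 = azurerm_user_assigned_identity.kubelet.principal_id
    user_assigned_identity_id = azurerm_user_assigned_identity.kubelet.id
  }

  default_node_pool {
    name           = "system"
    vm_size        = "Standard_D4s_v5"
    node_count     = 2
    vnet_subnet_id = var.node_pool_subnet_id # subnet in the spoke VNET s
  }

  # Azure CNI overlay with the Cilium data plane.
  network_profile {
    network_plugin      = "azure"
    network_plugin_mode = "overlay"
    network_data_plane  = "cilium"
  }
}
```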

Environment (please complete the following information):

  • Terraform v1.9.4 with azurerm 4.20, though again I don't think the issue is specific to Terraform
  • Kubernetes version 1.31.5
asifkd012020 commented

@xi4n - this is by-design behavior; see the hub-and-spoke private AKS documentation: https://learn.microsoft.com/en-us/azure/aks/private-clusters?tabs=default-basic-networking%2Cazure-portal#hub-and-spoke-with-custom-dns

The private DNS zone is linked only to the VNet that the cluster nodes are attached to (3). This means that the private endpoint can only be resolved by hosts in that linked VNet. In scenarios where no custom DNS is configured on the VNet (default), this works without issue as hosts point at 168.63.129.16 for DNS that can resolve records in the private DNS zone because of the link.
