Describe the bug
Let's say I have a classic hub-and-spoke network topology and I would like to create an AKS private cluster in a subnet of the spoke VNET s. I use a custom DNS server in the hub VNET h and, following Azure best practices, the private DNS zones are linked only to the hub VNET h, including privatelink.<my_region>.azmk8s.io, which already exists prior to the creation of the cluster. The spoke VNET s is configured to use the custom DNS server in h to resolve DNS requests.
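For illustration, a minimal Terraform sketch of that pre-existing setup might look like the following; the resource names (azurerm_resource_group.dns, azurerm_virtual_network.hub) are hypothetical and not taken from my actual configuration:

```hcl
# Hypothetical sketch of the pre-existing DNS setup (resource names are illustrative).
resource "azurerm_private_dns_zone" "aks" {
  name                = "privatelink.<my_region>.azmk8s.io"
  resource_group_name = azurerm_resource_group.dns.name
}

# The zone is linked only to the hub VNET h, per Azure hub-and-spoke guidance.
resource "azurerm_private_dns_zone_virtual_network_link" "hub" {
  name                  = "link-hub"
  resource_group_name   = azurerm_resource_group.dns.name
  private_dns_zone_name = azurerm_private_dns_zone.aks.name
  virtual_network_id    = azurerm_virtual_network.hub.id
}
```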
Now, when I create the private cluster by providing it the private DNS zone ID of privatelink.<my_region>.azmk8s.io, granting it the Private DNS Zone Contributor role on the private DNS zone and the Contributor role on the node pool subnets (not on the whole VNET s), which all reside in the VNET s, the creation of the cluster fails with an error in Terraform, although I don't believe the error is related to Terraform itself.
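For context, a minimal sketch of those role assignments, assuming a hypothetical azurerm_user_assigned_identity.cluster for the cluster identity and a hypothetical var.node_pool_subnet_ids list (neither name is taken from my configuration):

```hcl
# Hypothetical sketch of the role assignments described above (identity and variable names are illustrative).
resource "azurerm_role_assignment" "dns_zone_contributor" {
  scope                = var.private_dns_zone_id
  role_definition_name = "Private DNS Zone Contributor"
  principal_id         = azurerm_user_assigned_identity.cluster.principal_id
}

resource "azurerm_role_assignment" "subnet_contributor" {
  for_each             = toset(var.node_pool_subnet_ids) # only the node pool subnets, not the whole VNET s
  scope                = each.value
  role_definition_name = "Contributor"
  principal_id         = azurerm_user_assigned_identity.cluster.principal_id
}
```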
Expected behavior
The private cluster should accept the hub VNET as the single source of DNS resolution when it creates the private endpoint in the spoke VNET, because that is what Azure suggests we do.
Current behavior
The private cluster tries to link the private DNS zone I gave it to the spoke VNET if it is not already linked. If it does not have enough permissions to do so (because I only granted permissions at the subnet level in s, not at the VNET level), it throws an error.
A workaround
I could link the private DNS zone to both my hub and spoke VNETs, which would solve the problem during creation, but this is not how it should be, because the spoke VNET link will never be used (see the sketch below).
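A minimal sketch of that workaround, reusing the hypothetical names from the sketch above (azurerm_virtual_network.spoke is likewise illustrative):

```hcl
# Hypothetical sketch of the workaround: additionally link the zone to the spoke VNET s.
resource "azurerm_private_dns_zone_virtual_network_link" "spoke" {
  name                  = "link-spoke"
  resource_group_name   = azurerm_resource_group.dns.name
  private_dns_zone_name = azurerm_private_dns_zone.aks.name
  virtual_network_id    = azurerm_virtual_network.spoke.id
}
```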
Did I use any preview features?
Some AKS preview features are enabled in the subscription of the spoke VNET. However, I was not using any preview features related to this bug; in particular, I was not using API server VNET integration, and AKS was supposed to create a private endpoint for me to access the control plane.
To Reproduce
You can just use the most recent Terraform azurerm_kubernetes_cluster resource, with two user-assigned managed identities (one for the cluster, one for kubelet); the network is CNI overlay + Cilium. You need to provide:
dns_prefix_private_cluster          = "something-you-like"
private_cluster_enabled             = true
private_dns_zone_id                 = var.private_dns_zone_id # id of the private DNS zone `privatelink.<my_region>.azmk8s.io`
private_cluster_public_fqdn_enabled = false
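A minimal reproduction sketch around those arguments might look like this; the resource names, identity resources, node pool sizing, and var.* values are illustrative assumptions, not taken from my actual configuration:

```hcl
# Hypothetical reproduction sketch; identity, subnet, and variable names are illustrative.
resource "azurerm_kubernetes_cluster" "this" {
  name                = "aks-private-repro"
  location            = var.location
  resource_group_name = var.resource_group_name

  # Private cluster settings as described above.
  dns_prefix_private_cluster          = "something-you-like"
  private_cluster_enabled             = true
  private_dns_zone_id                 = var.private_dns_zone_id
  private_cluster_public_fqdn_enabled = false

  default_node_pool {
    name           = "system"
    vm_size        = "Standard_D4s_v5"
    node_count     = 1
    vnet_subnet_id = var.node_pool_subnet_id # subnet in the spoke VNET s
  }

  # Cluster identity (user-assigned).
  identity {
    type         = "UserAssigned"
    identity_ids = [azurerm_user_assigned_identity.cluster.id]
  }

  # Separate user-assigned identity for kubelet.
  kubelet_identity {
    client_id                 = azurerm_user_assigned_identity.kubelet.client_id
    object_id                 = azurerm_user_assigned_identity.kubelet.principal_id
    user_assigned_identity_id = azurerm_user_assigned_identity.kubelet.id
  }

  # CNI overlay + Cilium.
  network_profile {
    network_plugin      = "azure"
    network_plugin_mode = "overlay"
    network_data_plane  = "cilium"
  }
}
```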
Environment (please complete the following information):
Terraform v1.9.4 with azurerm 4.20, but again I don't think this is Terraform-related
Kubernetes version 1.31.5
The private DNS zone is linked only to the VNet that the cluster nodes are attached to (3). This means that the private endpoint can only be resolved by hosts in that linked VNet. In scenarios where no custom DNS is configured on the VNet (the default), this works without issue, as hosts point at 168.63.129.16 for DNS, which can resolve records in the private DNS zone because of the link.
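For comparison, in the setup described in this issue the spoke VNET does not use the default 168.63.129.16 resolver but a custom DNS server in the hub. A minimal sketch of that spoke VNET configuration, with a hypothetical hub DNS server IP and illustrative names:

```hcl
# Hypothetical sketch: spoke VNET s configured to use the custom DNS server in hub VNET h.
resource "azurerm_virtual_network" "spoke" {
  name                = "vnet-spoke"
  location            = var.location
  resource_group_name = var.resource_group_name
  address_space       = ["10.1.0.0/16"]

  # IP of the custom DNS server in the hub (illustrative); without this, hosts would use 168.63.129.16.
  dns_servers = ["10.0.0.4"]
}
```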