Revert #3588 "IPv6 internal node IPs are usable externally" #4574
Conversation
Hi @jonasbadstuebner. Thanks for your PR. I'm waiting for a kubernetes-sigs member to verify that this patch is reasonable to test. If it is, they should reply with /ok-to-test. Once the patch is verified, the new status will be reflected by the ok-to-test label. I understand the commands that are listed here. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.
Kubernetes does not publish IPv6 addresses as external node IPs; it only publishes them as internal. Reverting this commit would result in IPv6 addresses never being published as external. If you don't want the addresses published externally, don't request that the nodes be published externally. I object to this PR.
I am unsure what you mean by "Kubernetes publishes IPs". Could you explain what you mean by that? The official documentation on Node addresses states that an InternalIP is "typically" only reachable from within the cluster. The reasoning behind your change is not transparent to me. I don't understand why external-dns should behave this way.
@jonasbadstuebner Wdyt of PR #4593? Would that solve your issue?
@mloiseleur I'm not sure if your PR fixes this, but as stated in #3588 (comment), it is currently not possible to differentiate between internal and external IPv6 addresses for e.g. NodePorts or the K8s Node itself.
Thank you for the reply, @mloiseleur. As @sebastiangaiser said, this is not the issue we are having. We want AAAA records, but only for IPv6 addresses that are marked as ExternalIP. But external-dns does not give us the ability to disable AAAA records for InternalIP addresses.
Where the node has:
If someone wants to have public DNS records created for Nodes, they should set the respective IP as ExternalIP. For a DNS record to be created, it should not be enough that the IP simply is a v6 address. If you have a different opinion, please elaborate. And as I said, a CCM not adding the IPv6 address as ExternalIP is something to address in the CCM.
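For readers less familiar with node address types, here is a minimal Go sketch (not external-dns code; the addresses are made-up placeholders) of the distinction being argued about: Kubernetes tags each entry in a Node's status.addresses as InternalIP or ExternalIP, and a consumer can select addresses by that type.

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

// addressesOfType returns the node addresses that carry the given type.
func addressesOfType(node corev1.Node, t corev1.NodeAddressType) []string {
	var out []string
	for _, a := range node.Status.Addresses {
		if a.Type == t {
			out = append(out, a.Address)
		}
	}
	return out
}

func main() {
	var node corev1.Node
	node.Status.Addresses = []corev1.NodeAddress{
		{Type: corev1.NodeInternalIP, Address: "10.0.0.5"},    // private v4
		{Type: corev1.NodeInternalIP, Address: "fd00::5"},     // ULA v6, cluster-internal
		{Type: corev1.NodeExternalIP, Address: "203.0.113.5"}, // public v4
	}
	fmt.Println(addressesOfType(node, corev1.NodeExternalIP)) // [203.0.113.5]
}
```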
@jonasbadstuebner I understand your reasoning. Some users will want to publish it, though, as described in #1875. @johngmyers, would that solve your objection?
@mloiseleur Thank you for the reply. The linked issue does not say anything about what kind of IPv6 should be exposed.
Sorry, my message was not clear. You're right: the linked issue does not say anything about what kind of IPv6 should be exposed. Maybe I'm wrong, but I'm guessing from this fact that some users are fine with publishing the internal node IPv6. As @johngmyers said in #3588, it can happen that IPv6 node addresses are reported as type InternalIP even though they are usable externally.
My reasoning is: if this is the case for your cluster, where the IPv6 marked as InternalIP is in fact reachable externally, then it should be reported as ExternalIP in the first place. I am repeating myself, please read my comments in this PR and also the arguments in the linked issue #4566.
Thanks for taking the time. I double-checked and it seems there is nothing that forbids a user from marking the IP correctly.
/ok-to-test
Thank you very much. Because this changes the behavior of external-dns, it might cause outages for users if they don't double-check that their setup has the correct type set for their node IPs.
@johngmyers can you please provide a more detailed answer to the comments that followed your last one? I'd love to better understand both your point of view and that of the author of this PR.
It could be that users of the feature (maybe the wrong feature!?) do not control the environment, so they have no choice. From my side, as already stated a bunch of times, I personally think we should just drop DNS for pods/nodes. I won't actively do this, but IMO external-dns should not be the tool that provides all kinds of DNS workarounds for every application that exists. It just adds complexity on our side, time is limited, and external-dns was clearly built around Ingress (and Services of type LoadBalancer), so everything that is similar to Ingress will work reliably and the rest is likely buggy (we can see this in all kinds of PRs, issues, questions in chat, ...).
@szuecs Please give me and everyone else involved or interested a good explanation of why the feature was approved as it is right now. I think "users might not be able to change this" is not a good reason for an annotation as a feature flag, because if you can annotate the Node, you should also be able to change the IP type. If you are not, you should not be the one running external-dns for Node DNS entries in the first place. Dropping support completely would force users of this feature to migrate to a completely different tool, or to have multiple different tools for handling DNS entries. The first option is probably not in the interest of external-dns, and the second one would be much harder to operate.
Please be friendly. Right now it seems that you are not. I approved the feature because the code looked fine and John was doing major work on the IPv6/AAAA part. Maybe he was testing clusters of a type (likely AWS, because he is a kops maintainer) that had the behavior as he wrote.
Again, please try to be friendly, even if you are disappointed. I meant that ExternalIP vs. InternalIP is not under user control if you think of the user as a cluster admin. You seem to be on the provider side, therefore you control these fields. Cluster admins don't.
I agree that a breaking change is bad and I didn't notice it back then.
As I said earlier I will not do it.
That's a fair question, and the answer is: because we support the node source.
I am sorry that my pragmatic approach comes across as unfriendly and that I don't seem to hide my frustration very well.
You are right that I am on the provider side. And probably I don't know every use case of external-dns. If there is a use case for InternalIPs being published, it should be covered. IMO, there is only the one use case where a Node has only InternalIPs set. These are already published in that case. Whatever the solution to this specific problem is or might be, it should be the same behavior for IPv4 and IPv6. If the AWS CCM sets only the IPv4 as ExternalIP and the publicly reachable IPv6 as InternalIP, this would match the problem John was seeing. But I don't use AWS and never did, so I wouldn't know. Maybe it would be an idea to run the same fallback logic for IPv4 and IPv6 separately, meaning if you have:
- an IPv4 ExternalIP
- an IPv4 InternalIP
- an IPv6 InternalIP

it should lead to the IPv4 ExternalIP being published (because ExternalIP is chosen over InternalIP) and the IPv6 InternalIP, since there is no public v6 one; see the sketch below. But then it would make sense to have a flag that chooses the running mode of external-dns, so that I can choose to run it in IPv4-only mode even though I have an IPv6 set, and vice versa, plus an option for having both.
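To make the proposal concrete, here is a minimal Go sketch of that per-family fallback. It is illustrative only, not the actual external-dns implementation; the function names and addresses are made up.

```go
package main

import (
	"fmt"
	"net"

	corev1 "k8s.io/api/core/v1"
)

// isIPv6 reports whether the given address string is an IPv6 address.
func isIPv6(s string) bool {
	ip := net.ParseIP(s)
	return ip != nil && ip.To4() == nil
}

// selectPerFamily applies the fallback independently per IP family:
// prefer ExternalIP addresses, and use InternalIP only when the family
// has no ExternalIP at all.
func selectPerFamily(addrs []corev1.NodeAddress, v6 bool) []string {
	var external, internal []string
	for _, a := range addrs {
		if isIPv6(a.Address) != v6 {
			continue
		}
		switch a.Type {
		case corev1.NodeExternalIP:
			external = append(external, a.Address)
		case corev1.NodeInternalIP:
			internal = append(internal, a.Address)
		}
	}
	if len(external) > 0 {
		return external // ExternalIP wins within its own family
	}
	return internal // fall back only when the family has no ExternalIP
}

func main() {
	addrs := []corev1.NodeAddress{
		{Type: corev1.NodeExternalIP, Address: "203.0.113.5"}, // public v4
		{Type: corev1.NodeInternalIP, Address: "10.0.0.5"},    // private v4
		{Type: corev1.NodeInternalIP, Address: "fd00::5"},     // v6, internal only
	}
	fmt.Println(selectPerFamily(addrs, false)) // [203.0.113.5]
	fmt.Println(selectPerFamily(addrs, true))  // [fd00::5]
}
```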
I am sorry about that.
I am sure of it. And as I said already, I never wanted to discredit any work done. To me it seems like I was just "left on read" while having a valid point.
I deployed the version of this PR to fix my problem.
We are discussing the external-dns change in https://kubernetes.slack.com/archives/C771MKDKQ/p1723531094274339. John explicitly stated that he needs InternalIP to support AWS, which is likely the main case he was developing the PR for. The PR is more than a year old and I don't believe we should just revert it if there was no issue until now. My current suggestion is that we could have a provider-specific override for the handling of these node IPs. This would enable providers to do the right thing based on provider specifics, which could also be provider-specific options. It will be interesting to see how we fit this into the webhook provider, but there are already ways to pass provider specifics there, so I think we can do this. A rough sketch of such an override is below.
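As a purely hypothetical illustration of the shape such a provider-specific override could take (no such interface exists in external-dns today; the names are invented):

```go
package nodepolicy

import corev1 "k8s.io/api/core/v1"

// NodeAddressPolicy is a hypothetical hook a provider (or the webhook
// provider, via provider-specific properties) could implement to decide
// which node addresses become DNS targets, instead of one global rule.
type NodeAddressPolicy interface {
	SelectTargets(addresses []corev1.NodeAddress) []string
}
```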
PR needs rebase. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.
This reverts commit 683663e.
[APPROVALNOTIFIER] This PR is NOT APPROVED
This pull-request has been approved by:
The full list of commands accepted by this bot can be found here.
Needs approval from an approver in each of these files:
Approvers can indicate their approval by writing /approve in a comment.
Please give us the ability to not expose IPv6 InternalIPs, and/or revert this change. As is, using ExternalIP for routable addresses and InternalIP for the cluster's ULA addresses results in the ULA IPv6 being added to the DNS record.
I totally agree with #4808 (comment)
@szuecs @mloiseleur @Raffo please consider the breaking change, as this behavior is unexpected from a user perspective. I totally respect decisions from the past, but products/tools need to evolve in order not to be replaced at some point.
Description
This reverts commit 683663e from #3588.
As stated in #4566, the behavior of IPv4 and IPv6 addresses should not differ the way it currently does.
This PR makes external-dns respect the difference between a Kubernetes internal and external Node IP (again). The tests are updated so that they check with both Internal and External IPs where it makes sense to do so.
Reasoning
The fact that IPv6 addresses could be reachable from global networks is not a good enough reason to remove the ability to opt out of creating IPv6 DNS records.
If you want to have external-dns create AAAA records for your "private" IPv6 addresses, you should mark them as ExternalIP, not have external-dns create them no matter what.

Fixes #4566
Checklist
Additional Notes
The next release after this PR should include an "Important Changes" section mentioning this change, even though v0.13.5 did not include one for #3588. IMO it should have had one too.