This PR reworks `tpod-proxy`/`tpodproxy` to be a proxy for the actual underlying HTTP pod, and not a proxy for the `tpodserver`. The main benefit of doing this is that users can connect to a specific version of the target pod and know that the request will fail if the publisher updates the pod later on, thus eliminating a possible TOCTOU (time-of-check to time-of-use) vulnerability where a user checks a pod before sending a request, but the publisher swaps the pod out in the meantime. This also happens to bring the implementation closer to the design in #35.

Architecturally, the proxy implemented by this PR is part of the same auto-scaled Pod as the rest of the user's code, just like the earlier proxy. However, the new version takes all requests from the KEDA HTTP scaler and routes them to the appropriate version of the user pod, if it's available, or returns an error if the user pod's version has changed (by relying on the fact that it will fail to connect to the old version). It accomplishes that by using an extra Service for each version, which seemed like the cleanest Kubernetes way to achieve the result, though it might be a bit unorthodox.
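To make the routing idea concrete, here is a minimal sketch in Go. The Service naming scheme, the port, and the header-to-Service mapping are all assumptions for illustration, not the actual tpodproxy code; the point is only that rewriting each request's upstream to a per-version Service makes requests for a removed version fail to connect:

```go
package main

import (
	"fmt"
	"log"
	"net/http"
	"net/http/httputil"
	"net/url"
)

func main() {
	proxy := &httputil.ReverseProxy{
		// Rewrite (Go 1.20+) picks the upstream for each request.
		Rewrite: func(r *httputil.ProxyRequest) {
			// Assumed mapping: the expected-version header doubles as the
			// suffix of a per-version Service name. The real derivation is
			// internal to Apocryph; this is purely illustrative.
			version := r.In.Header.Get("X-Apocryph-Expected")
			target, err := url.Parse(fmt.Sprintf("http://user-pod-%s:8080", version))
			if err != nil {
				return
			}
			// If the publisher has since replaced the pod, the old version's
			// Service no longer has any endpoints, so the dial fails and the
			// user gets an error instead of a response from the wrong version.
			r.SetURL(target)
		},
	}
	log.Fatal(http.ListenAndServe(":8080", proxy))
}
```

The actual proxy sits behind the KEDA HTTP scaler rather than listening directly, but the failure mode is the same: a request pinned to an outdated version has nothing left to connect to.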
Operationally, this PR's proxy works as follows. From the perspective of the user, there are five steps to using an Apocryph app with attestation; the key pieces of those steps are the attestation endpoint at `/.well-known/network.apocryph.attest` and the `X-Apocryph-Expected` request header, which holds an internal Apocryph value that is based on hashing all the image URLs together. The proxy's main job is ensuring that a request is routed to the pod only if that header matches.
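For illustration, a client pinning its requests might look roughly like the sketch below. The exact derivation of the expected value is internal to Apocryph, so the SHA-256 over sorted image URLs, the URL, and the image name here are all assumptions:

```go
package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"net/http"
	"sort"
	"strings"
)

// expectedValue is a stand-in for Apocryph's internal derivation: here we
// assume a SHA-256 over the sorted image URLs, purely for illustration.
func expectedValue(imageURLs []string) string {
	sort.Strings(imageURLs)
	sum := sha256.Sum256([]byte(strings.Join(imageURLs, "\n")))
	return hex.EncodeToString(sum[:])
}

func main() {
	req, err := http.NewRequest("GET", "https://app.example.com/", nil)
	if err != nil {
		panic(err)
	}
	// Pin the request to the attested pod version; the proxy forwards the
	// request only if this header matches the version it is running.
	req.Header.Set("X-Apocryph-Expected", expectedValue([]string{
		"docker.io/example/app:v1.2.3",
	}))
	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		panic(err)
	}
	fmt.Println(resp.Status)
}
```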
Something that might require deeper thinking later on is figuring out a way for Web-based clients to pin their later requests to the same certificate that was attested earlier. `new WebTransport({serverCertificateHashes: ...})` could probably work for that, especially as it's now supported by both Chrome and Firefox.

In terms of security, I'm a bit worried about having both the proxy and the user application in the same pod; for sandboxing, I'd like them to be separate. Doing this would require reaching deeper into KEDA HTTP's internals and managing the ScaledObjects manually, which is why I opted to leave the two together for now.
This PR builds on top of #53.