Replies: 2 comments 2 replies
- actually I found the answer to the first question in the makefile.
- thanks for the details :)! (closing discussion)
Hello :). I'm currently playing with this really cool project -> nice design!
Several (mostly bpf-related) questions:
Why is the vmlinux.h.gz BTF file provided to generate the bpf programs? This seems less robust than generating and extracting it from the target kernel at (cross-)compile time, no? Do you have any compatibility inventory between the types in this file and the kernel versions the bpf programs compiled from it will run on?
Can you confirm I'm understanding the following correctly?
Regarding the bpf programs, there are two different programs:
Regarding this issue: use Pod IP to track connections instead of container PID #85
When you say "if the socket is created by a nested process of 2 layers deep it will not be tracked", do you mean because here you only look for a pid equal to either the tgid or the ppid in the map (rough sketch of my reading below)?
zeropod/socket/kprobe.c, line 75 in e75ed9a
Tracking with the pod IP instead of the pid would have drawbacks as well, no? (like not tracking accept()s called from within the container on the lo interface, but I agree this is an edge case we may ignore)
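To make my reading of that check concrete, here is a rough Go sketch of what I think the kprobe logic does; everything here is made up for illustration (the real check lives in the C code, and `trackedPids` stands in for the bpf map of registered init pids):

```go
package main

import "fmt"

// trackedPids stands in for the bpf map of container init pids that the shim
// registers (hypothetical, not the real map).
var trackedPids = map[uint32]bool{1234: true}

// shouldTrack mirrors my reading of the check in kprobe.c: a socket is only
// tracked if the calling process itself (tgid) or its direct parent (ppid) is
// a registered init pid. A grandchild of the init process matches neither, so
// its sockets would be missed.
func shouldTrack(tgid, ppid uint32) bool {
	return trackedPids[tgid] || trackedPids[ppid]
}

func main() {
	fmt.Println(shouldTrack(1234, 1))    // the init process itself -> tracked
	fmt.Println(shouldTrack(2345, 1234)) // a direct child of init -> tracked
	fmt.Println(shouldTrack(3456, 2345)) // a grandchild, 2 layers deep -> not tracked
}
```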
Can you detail the current limitations regarding the maximum number of pods that can be tracked/checkpointed on a single node?
What would happen if there were more than this?
For example, when I look here:
zeropod/socket/kprobe.c, line 16 in e75ed9a
if I understand correctly, we cannot track more than 1024 pids (e.g. 1024 init processes/containers/pods) on a given node, right?
But the consequence of this is not that fatal, right? Looking here:
zeropod/socket/ebpf.go, line 181 in e75ed9a
https://github.com/ctrox/zeropod/blob/main/shim/container.go#L403
it just means that the init pids of any further pods/containers will never be candidates for scale-down, right?
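Here is a self-contained sketch of the behaviour I'm describing, using cilium/ebpf directly (I'm assuming that's what ebpf.go is built on; the map type, key/value sizes and error handling here are my guesses, not the zeropod code). Once the map is full, further pids simply fail to be inserted:

```go
package main

import (
	"fmt"

	"github.com/cilium/ebpf"
)

func main() {
	// A hash map with the limit I see in kprobe.c: at most 1024 tracked pids
	// per node (4-byte keys/values are just for the sketch).
	m, err := ebpf.NewMap(&ebpf.MapSpec{
		Type:       ebpf.Hash,
		KeySize:    4,
		ValueSize:  4,
		MaxEntries: 1024,
	})
	if err != nil {
		panic(err) // creating bpf maps needs root / CAP_BPF
	}
	defer m.Close()

	// Fill the map completely.
	for pid := uint32(1); pid <= 1024; pid++ {
		if err := m.Put(pid, uint32(1)); err != nil {
			panic(err)
		}
	}

	// The 1025th pid cannot be inserted: the kernel rejects the update
	// (E2BIG), so that container could never be considered for scale-down.
	if err := m.Put(uint32(1025), uint32(1)); err != nil {
		fmt.Printf("pid 1025 not tracked: %v\n", err)
	}
}
```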
There is something I'm not understanding with the noop (aka fallback) tracker. Here:
zeropod/socket/noop.go, line 28 in e75ed9a
Setting the last activity to the idle scale-down duration will systematically trigger a scale-down after the annotated duration, no?
Wouldn't we prefer to never scale down at all if the bpf tracker cannot be set up?
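To phrase the question precisely, here is a made-up Go sketch of the two behaviours I'm contrasting; the interface and type names are hypothetical and do not match the real zeropod types:

```go
package main

import (
	"fmt"
	"time"
)

// activityTracker is a hypothetical interface, only here to state the question.
type activityTracker interface {
	LastActivity(pid int) (time.Time, error)
}

// noopStatic is how I currently read noop.go: the last activity is set once at
// setup and never refreshed, so the idle check below fires as soon as the
// annotated scale-down duration has elapsed.
type noopStatic struct{ start time.Time }

func (n noopStatic) LastActivity(int) (time.Time, error) { return n.start, nil }

// noopNever is the fallback behaviour I would have expected: always report
// fresh activity, so a pod whose bpf tracker could not be set up is simply
// never scaled down.
type noopNever struct{}

func (noopNever) LastActivity(int) (time.Time, error) { return time.Now(), nil }

// shouldScaleDown is the idle check as I picture it in the shim.
func shouldScaleDown(t activityTracker, pid int, scaleDownDuration time.Duration) bool {
	last, err := t.LastActivity(pid)
	if err != nil {
		return false
	}
	return time.Since(last) >= scaleDownDuration
}

func main() {
	d := 100 * time.Millisecond
	static := noopStatic{start: time.Now()}
	time.Sleep(d) // let the annotated duration pass

	fmt.Println(shouldScaleDown(static, 1234, d))      // true: scale-down always triggers
	fmt.Println(shouldScaleDown(noopNever{}, 1234, d)) // false: never scales down
}
```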
thanks :)