
Enabling modsecurity rules per location is extremely slow #12927

Open
champtar opened this issue Mar 4, 2025 · 4 comments
Labels
kind/support (Categorizes issue or PR as a support question.), needs-priority, needs-triage (Indicates an issue or PR lacks a `triage/foo` label and requires one.)

Comments

@champtar

champtar commented Mar 4, 2025

What happened:

I am installing multiple applications that each provide their own Ingress and want to enable modsecurity.

What you expected to happen:

It works :)

NGINX Ingress controller version (exec into the pod and run /nginx-ingress-controller --version):

kubectl exec -it $POD_NAME -n $POD_NAMESPACE -- /nginx-ingress-controller --version  
-------------------------------------------------------------------------------
NGINX Ingress controller
  Release:       v1.11.3
  Build:         0106de65cfccb74405a6dfa7d9daffc6f0a6ef1a
  Repository:    https://github.com/kubernetes/ingress-nginx
  nginx version: nginx/1.25.5

-------------------------------------------------------------------------------

Kubernetes version (use kubectl version):

Client Version: v1.32.1
Kustomize Version: v5.5.0
Server Version: v1.32.1

Environment:

  • Cloud provider or hardware configuration:

  • OS (e.g. from /etc/os-release): Alma 9.5

  • Kernel (e.g. uname -a): 5.14.0-503.16.1.el9_5.x86_64

  • Install tools:

    • kubeadm
  • How was the ingress-nginx-controller installed:

# helm ls -A | grep -i ingress
ingress-nginx	ingress-nginx	8       	2025-03-04 01:16:37.26687465 +0000 UTC	deployed	ingress-nginx-4.11.3	1.11.3

# helm -n ingress-nginx get values ingress-nginx
USER-SUPPLIED VALUES:
controller:
  admissionWebhooks:
    enabled: true
    patch:
      enabled: true
  allowSnippetAnnotations: true
  config:
    worker-processes: "2"
  containerPort:
    http: 80
    https: 443
  extraArgs:
    default-ssl-certificate: ingress-nginx/cockpit-certificate
  hostPort:
    enabled: true
    ports:
      http: 8080
      https: 8443
  image:
    digest: ""
    registry: 127.0.0.1:5000
  kind: Deployment
  metrics:
    enabled: false
  service:
    enabled: false
  terminationGracePeriodSeconds: 5
  updateStrategy:
    type: Recreate
defaultBackend:
  enabled: false
  • Current State of the controller:
# kubectl describe ingressclasses
Name:         nginx
Labels:       app.kubernetes.io/component=controller
              app.kubernetes.io/instance=ingress-nginx
              app.kubernetes.io/managed-by=Helm
              app.kubernetes.io/name=ingress-nginx
              app.kubernetes.io/part-of=ingress-nginx
              app.kubernetes.io/version=1.11.3
              helm.sh/chart=ingress-nginx-4.11.3
Annotations:  meta.helm.sh/release-name: ingress-nginx
              meta.helm.sh/release-namespace: ingress-nginx
Controller:   k8s.io/ingress-nginx
Events:       <none>
# kubectl -n ingress-nginx get all -o wide
NAME                                           READY   STATUS    RESTARTS   AGE   IP            NODE             NOMINATED NODE   READINESS GATES
pod/ingress-nginx-controller-59f6c869b-nss82   1/1     Running   0          16m   198.18.0.23   appliance   <none>           <none>

NAME                                         TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)   AGE    SELECTOR
service/cockpit                              ClusterIP   None            <none>        443/TCP   106m   <none>
service/ingress-nginx-controller-admission   ClusterIP   198.18.218.72   <none>        443/TCP   16m    app.kubernetes.io/component=controller,app.kubernetes.io/instance=ingress-nginx,app.kubernetes.io/name=ingress-nginx

NAME                                       READY   UP-TO-DATE   AVAILABLE   AGE   CONTAINERS   IMAGES                                            SELECTOR
deployment.apps/ingress-nginx-controller   1/1     1            1           79m   controller   127.0.0.1:5000/ingress-nginx/controller:v1.11.3   app.kubernetes.io/component=controller,app.kubernetes.io/instance=ingress-nginx,app.kubernetes.io/name=ingress-nginx

NAME                                                  DESIRED   CURRENT   READY   AGE   CONTAINERS   IMAGES                                            SELECTOR
replicaset.apps/ingress-nginx-controller-59f6c869b    1         1         1       79m   controller   127.0.0.1:5000/ingress-nginx/controller:v1.11.3   app.kubernetes.io/component=controller,app.kubernetes.io/instance=ingress-nginx,app.kubernetes.io/name=ingress-nginx,pod-template-hash=59f6c869b
  • Current state of ingress object, if applicable:

  • Others:

How to reproduce this issue:

Run ingress-nginx with the validation webhook enabled; you can bump the webhook timeout to 30s.
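One way to bump the timeout (a sketch: the object and webhook names are assumed from the default Helm install, and 30s is the maximum the admission API allows) is to set `timeoutSeconds` on the webhook configuration:

```yaml
# Sketch: relevant field on the webhook object. Names are assumed from
# the default Helm chart; timeoutSeconds defaults to 10 and caps at 30.
apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingWebhookConfiguration
metadata:
  name: ingress-nginx-admission
webhooks:
  - name: validate.nginx.ingress.kubernetes.io
    timeoutSeconds: 30
```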

for i in $(seq 1 100); do
echo $i
date
time kubectl apply -f- <<EOF
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: test-$i
  namespace: default
  annotations:
    nginx.ingress.kubernetes.io/enable-modsecurity: "true"
    nginx.ingress.kubernetes.io/enable-owasp-core-rules: "true"
    #nginx.ingress.kubernetes.io/modsecurity-snippet:
    #  Include /etc/nginx/owasp-modsecurity-crs/nginx-modsecurity.conf
spec:
  ingressClassName: nginx
  rules:
    - http:
        paths:
          - pathType: Prefix
            backend:
              service:
                name: kubernetes
                port:
                  number: 443
            path: /$i
EOF
done

With 6 Ingresses, creation still works; at the 7th, creation takes more than 10s; at the 12th, it takes more than 30s and the webhook call times out:

6
Tue Mar  4 01:20:08 UTC 2025
ingress.networking.k8s.io/test-6 created

real	0m7.461s
user	0m0.051s
sys	0m0.010s
7
Tue Mar  4 01:20:16 UTC 2025
ingress.networking.k8s.io/test-7 created

real	0m10.940s
user	0m0.049s
sys	0m0.013s
8
Tue Mar  4 01:20:26 UTC 2025
ingress.networking.k8s.io/test-8 created

real	0m14.302s
user	0m0.054s
sys	0m0.008s
9
Tue Mar  4 01:20:41 UTC 2025
ingress.networking.k8s.io/test-9 created

real	0m15.154s
user	0m0.056s
sys	0m0.008s
10
Tue Mar  4 01:20:56 UTC 2025
ingress.networking.k8s.io/test-10 created

real	0m23.889s
user	0m0.051s
sys	0m0.011s
11
Tue Mar  4 01:21:20 UTC 2025
ingress.networking.k8s.io/test-11 created

real	0m28.684s
user	0m0.047s
sys	0m0.014s
12
Tue Mar  4 01:21:49 UTC 2025
Error from server (InternalError): error when creating "STDIN": Internal error occurred: failed calling webhook "validate.nginx.ingress.kubernetes.io": failed to call webhook: Post "https://ingress-nginx-controller-admission.ingress-nginx.svc:443/networking/v1/ingresses?timeout=30s": net/http: request canceled (Client.Timeout exceeded while awaiting headers)

real	0m30.053s
user	0m0.046s
sys	0m0.012s
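A quick back-of-the-envelope check on the numbers above: the marginal cost of each additional modsecurity-enabled Ingress is on the order of seconds and tends to grow, which would be consistent with every location re-loading the full rule set on each config validation (an inference from the timings, not a confirmed root cause):

```python
# Wall-clock times (seconds) for creating test-6 .. test-11,
# taken from the `time kubectl apply` output above.
times = {6: 7.461, 7: 10.940, 8: 14.302, 9: 15.154, 10: 23.889, 11: 28.684}

# Marginal cost of each additional modsecurity-enabled Ingress.
increments = [round(times[n] - times[n - 1], 3) for n in range(7, 12)]
print(increments)  # each new Ingress adds whole seconds, not milliseconds
```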

Anything else we need to know:

A good solution (for me) would be the ability to load the rules once globally while keeping modsecurity opt-in, rather than opt-out, per Ingress, i.e.:

config:
    enable-modsecurity: "true"
    enable-owasp-core-rules: "true"
    enable-modsecurity-by-default: "false"
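For context on why per-location enabling is costly: each annotated path gets its own ModSecurity directives in the rendered nginx.conf, so the OWASP CRS is parsed once per location on every reload and config test. Roughly (an illustrative sketch using the ModSecurity-nginx connector directives, not the controller's exact template output):

```nginx
# Illustrative: repeated for every Ingress path that sets the annotations,
# so nginx re-parses the full OWASP CRS once per location.
location /1 {
    modsecurity on;
    modsecurity_rules_file /etc/nginx/owasp-modsecurity-crs/nginx-modsecurity.conf;
}
location /2 {
    modsecurity on;
    modsecurity_rules_file /etc/nginx/owasp-modsecurity-crs/nginx-modsecurity.conf;
}
```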
@champtar champtar added the kind/bug Categorizes issue or PR as related to a bug. label Mar 4, 2025
@k8s-ci-robot k8s-ci-robot added the needs-triage Indicates an issue or PR lacks a `triage/foo` label and requires one. label Mar 4, 2025
@k8s-ci-robot
Contributor

This issue is currently awaiting triage.

If Ingress contributors determine this is a relevant issue, they will accept it by applying the triage/accepted label and provide further guidance.

The triage/accepted label can be added by org members by writing /triage accepted in a comment.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.

@longwuyuan
Contributor

/remove-kind bug
/kind support

  • Only you can debug where the time is being spent
  • I don't see any Service of --type LoadBalancer, so I wonder how you route from outside the cluster to the application pods
  • Eventually we want to deprecate modsecurity when possible, because it's hard to support and maintain
  • Also, this kind of packet filtering is better done at the edge of the network, such as on the Infra-Provider

@k8s-ci-robot k8s-ci-robot added kind/support Categorizes issue or PR as a support question. and removed kind/bug Categorizes issue or PR as related to a bug. labels Mar 4, 2025
@champtar
Author

* Only you can debug as to where the time is being spent

I'm only talking about configuration performance here; I haven't even checked whether it affects request performance in general (I'm troubleshooting for a colleague).
The time is spent loading (compiling?) the rules.

* I dont see any service --type Loadbalancer so wonder how you route from outside the cluster to the application pods

hostPort

* Eventually we want to deprecate modsecurity when possible because its hard to support and maintain

Is there a proper ticket to track this? Should it be mentioned in the release notes?
That would avoid people investing time in ingress-nginx + modsecurity.

* Also this kind of packet filtering is better done at the edge of the network like on the Infra-Provider

My use case is a single-server appliance / on-prem, so there is no magic Infra-Provider :(

@longwuyuan
Contributor

Thanks for the updated info. It helps.

The project's e2e tests do not include the combination of hostPort + modsecurity, so I am not sure what data to base any comments on.

Is there a chance that you can also test the same config without hostPort? Just to know if that has an impact.
