
add rate limit whitelist #1210

Merged
1 commit merged from sethpollack:whitelist into kubernetes:master on Aug 22, 2017

Conversation

sethpollack (Contributor)

No description provided.

@k8s-ci-robot k8s-ci-robot added the cncf-cla: yes Indicates the PR's author has signed the CNCF CLA. label Aug 22, 2017
@k8s-reviewable

This change is Reviewable

@coveralls

Coverage Status

Coverage decreased (-0.04%) to 44.417% when pulling 44f18f9 on sethpollack:whitelist into 1da974f on kubernetes:master.

@@ -55,6 +59,10 @@ type RateLimit struct {
    LimitRate int `json:"limit-rate"`

    LimitRateAfter int `json:"limit-rate-after"`

    Name string

Member

please add the json tag
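
For reference, a minimal sketch of what the requested change might look like, assuming the tag simply mirrors the field name (the merged diff may use a different tag string):

    // Sketch only: RateLimit with a json tag on the new Name field,
    // shown alongside the existing tagged fields from the hunk above.
    type RateLimit struct {
        LimitRate      int    `json:"limit-rate"`
        LimitRateAfter int    `json:"limit-rate-after"`

        Name string `json:"name"` // assumed tag value
    }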

@sethpollack (Contributor, Author)

@aledbf done.

Any thoughts on https://github.com/kubernetes/ingress/pull/1210/files#diff-2e6661544fdede38b5f2d11d5e28f1b8R343? I couldn't use the ingress namespace/name directly because - and . are valid characters in an ingress name but not in an NGINX variable name, and simply replacing those characters does not guarantee uniqueness.
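
For illustration only, here is a minimal sketch of the kind of encoding being discussed, using a hypothetical helper (not the code in this PR): base64 keeps the namespace/name mapping unique, whereas replacing individual characters cannot.

    package main

    import (
        "encoding/base64"
        "fmt"
        "strings"
    )

    // nginxSafeName is a hypothetical helper: it base64-encodes the
    // "namespace_name" key so uniqueness is preserved, then strips the
    // "=" padding, which is not valid inside an NGINX variable name.
    // A real implementation would also need to handle "-" from the
    // URL-safe alphabet, since variable names only allow letters,
    // digits and "_".
    func nginxSafeName(key string) string {
        enc := base64.URLEncoding.EncodeToString([]byte(key))
        return strings.TrimRight(enc, "=")
    }

    func main() {
        fmt.Println(nginxSafeName("default_sample-app"))
        // Prints ZGVmYXVsdF9zYW1wbGUtYXBw, which matches the
        // $..._whitelist variable in the generated nginx.conf below.
    }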

@sethpollack (Contributor, Author)

Here is the generated nginx.conf:


worker_processes 2;
pid /run/nginx.pid;

worker_rlimit_nofile 523264;
worker_shutdown_timeout 10s;

events {
    multi_accept        on;
    worker_connections  16384;
    use                 epoll;
}

http {
    real_ip_header      proxy_protocol;

    real_ip_recursive   on;
    set_real_ip_from    0.0.0.0/0;

    geoip_country       /etc/nginx/GeoIP.dat;
    geoip_city          /etc/nginx/GeoLiteCity.dat;
    geoip_proxy_recursive on;
    sendfile            on;
    aio                 threads;
    tcp_nopush          on;
    tcp_nodelay         on;

    log_subrequest      on;

    reset_timedout_connection on;

    keepalive_timeout  75s;
    keepalive_requests 100;

    client_header_buffer_size       1k;
    large_client_header_buffers     4 8k;
    client_body_buffer_size         8k;

    http2_max_field_size            4k;
    http2_max_header_size           16k;

    types_hash_max_size             2048;
    server_names_hash_max_size      1024;
    server_names_hash_bucket_size   32;
    map_hash_bucket_size            64;

    proxy_headers_hash_max_size     512;
    proxy_headers_hash_bucket_size  64;

    variables_hash_bucket_size      64;
    variables_hash_max_size         2048;

    underscores_in_headers          off;
    ignore_invalid_headers          on;

    include /etc/nginx/mime.types;
    default_type text/html;
    gzip on;
    gzip_comp_level 5;
    gzip_http_version 1.1;
    gzip_min_length 256;
    gzip_types application/atom+xml application/javascript application/x-javascript application/json application/rss+xml application/vnd.ms-fontobject application/x-font-ttf application/x-web-app-manifest+json application/xhtml+xml application/xml font/opentype image/svg+xml image/x-icon text/css text/plain text/x-component;
    gzip_proxied any;

    # Custom headers for response

    server_tokens on;

    # disable warnings
    uninitialized_variable_warn off;

    log_format upstreaminfo '$the_real_ip - [$the_real_ip] - $remote_user [$time_local] "$request" $status $body_bytes_sent "$http_referer" "$http_user_agent" $request_length $request_time [$proxy_upstream_name] $upstream_addr $upstream_response_length $upstream_response_time $upstream_status';

    map $request_uri $loggable {
        default 1;
    }

    access_log /var/log/nginx/access.log upstreaminfo if=$loggable;
    error_log  /var/log/nginx/error.log notice;

    resolver 100.64.0.10 valid=30s;

    # Retain the default nginx handling of requests without a "Connection" header
    map $http_upgrade $connection_upgrade {
        default          upgrade;
        ''               close;
    }

    # trust http_x_forwarded_proto headers correctly indicate ssl offloading
    map $http_x_forwarded_proto $pass_access_scheme {
        default          $http_x_forwarded_proto;
        ''               $scheme;
    }

    map $http_x_forwarded_port $pass_server_port {
       default           $http_x_forwarded_port;
       ''                $server_port;
    }

    map $http_x_forwarded_for $the_real_ip {
        default          $http_x_forwarded_for;
        ''               $proxy_protocol_addr;
    }

    map $pass_server_port $pass_port {
        443              443;
        default          $pass_server_port;
    }

    # Map a response error watching the header Content-Type
    map $http_accept $httpAccept {
        default          html;
        application/json json;
        application/xml  xml;
        text/plain       text;
    }

    map $httpAccept $httpReturnType {
        default          text/html;
        json             application/json;
        xml              application/xml;
        text             text/plain;
    }

    # Obtain best http host
    map $http_host $this_host {
        default          $http_host;
        ''               $host;
    }

    map $http_x_forwarded_host $best_http_host {
        default          $http_x_forwarded_host;
        ''               $this_host;
    }

    server_name_in_redirect off;
    port_in_redirect        off;

    ssl_protocols TLSv1 TLSv1.1 TLSv1.2;

    # turn on session caching to drastically improve performance
    ssl_session_cache builtin:1000 shared:SSL:10m;
    ssl_session_timeout 10m;

    # allow configuring ssl session tickets
    ssl_session_tickets on;

    # slightly reduce the time-to-first-byte
    ssl_buffer_size 4k;

    # allow configuring custom ssl ciphers
    ssl_ciphers 'ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES256-GCM-SHA384:DHE-RSA-AES128-GCM-SHA256:DHE-DSS-AES128-GCM-SHA256:kEDH+AESGCM:ECDHE-RSA-AES128-SHA256:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA:ECDHE-ECDSA-AES128-SHA:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA:ECDHE-ECDSA-AES256-SHA:DHE-RSA-AES128-SHA256:DHE-RSA-AES128-SHA:DHE-DSS-AES128-SHA256:DHE-RSA-AES256-SHA256:DHE-DSS-AES256-SHA:DHE-RSA-AES256-SHA:AES128-GCM-SHA256:AES256-GCM-SHA384:AES128-SHA256:AES256-SHA256:AES128-SHA:AES256-SHA:AES:CAMELLIA:DES-CBC3-SHA:!aNULL:!eNULL:!EXPORT:!DES:!RC4:!MD5:!PSK:!aECDH:!EDH-DSS-DES-CBC3-SHA:!EDH-RSA-DES-CBC3-SHA:!KRB5-DES-CBC3-SHA';
    ssl_prefer_server_ciphers on;

    ssl_ecdh_curve secp384r1;

    proxy_ssl_session_reuse on;

    upstream upstream-default-backend {
        # Load balance algorithm; empty for round robin, which is the default
        least_conn;
        server 100.96.0.4:8080 max_fails=0 fail_timeout=0;
    }

    upstream default-nginx-default-backend-80 {
        # Load balance algorithm; empty for round robin, which is the default
        least_conn;
        server 100.96.0.4:8080 max_fails=0 fail_timeout=0;
    }
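
    # Rate-limit whitelist handling (the feature in this PR). The variable
    # prefix is the base64 encoding of "default_sample-app"
    # (namespace_name), used because "-" and "." are valid in ingress
    # names but not in NGINX variable names. The geo block marks
    # whitelisted client addresses with 1, the map blanks the limit_req
    # key for them, and requests with an empty key are not counted
    # against the zone.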

    geo $ZGVmYXVsdF9zYW1wbGUtYXBw_whitelist {
        default 0;
        127.0.0.1 1;
    }

    map $ZGVmYXVsdF9zYW1wbGUtYXBw_whitelist $ZGVmYXVsdF9zYW1wbGUtYXBw_limit {
        0 $binary_remote_addr;
        1 "";
    }
    limit_req_zone $ZGVmYXVsdF9zYW1wbGUtYXBw_limit zone=default_sample-app_rpm:5m rate=10r/m;
    server {
        server_name _;
        listen 80 proxy_protocol default_server reuseport backlog=511;
        listen [::]:80 proxy_protocol default_server reuseport backlog=511;
        set $proxy_upstream_name "-";

        listen 443 proxy_protocol  default_server reuseport backlog=511 ssl http2;
        listen [::]:443 proxy_protocol  default_server reuseport backlog=511 ssl http2;        
        # PEM sha: 8264bb9cf0d2d0844812734ae2a2d0ef6782b74b
        ssl_certificate                         /ingress-controller/ssl/default-fake-certificate.pem;
        ssl_certificate_key                     /ingress-controller/ssl/default-fake-certificate.pem;

        more_set_headers                        "Strict-Transport-Security: max-age=15724800; includeSubDomains;";
        location / {
        set $proxy_upstream_name "upstream-default-backend";

        port_in_redirect off;

        client_max_body_size                    "1m";

        proxy_set_header Host                   $best_http_host;

        # Pass the extracted client certificate to the backend

        # Allow websocket connections
        proxy_set_header                        Upgrade           $http_upgrade;
        proxy_set_header                        Connection        $connection_upgrade;

        proxy_set_header X-Real-IP              $the_real_ip;
        proxy_set_header X-Forwarded-For        $the_real_ip;
        proxy_set_header X-Forwarded-Host       $best_http_host;
        proxy_set_header X-Forwarded-Port       $pass_port;
        proxy_set_header X-Forwarded-Proto      $pass_access_scheme;
        proxy_set_header X-Original-URI         $request_uri;
        proxy_set_header X-Scheme               $pass_access_scheme;

        # mitigate HTTPoxy Vulnerability
        # https://www.nginx.com/blog/mitigating-the-httpoxy-vulnerability-with-nginx/
        proxy_set_header Proxy                  "";

        # Custom headers to proxied server

        proxy_connect_timeout                   5s;
        proxy_send_timeout                      60s;
        proxy_read_timeout                      60s;

        proxy_redirect                          off;
        proxy_buffering                         off;
        proxy_buffer_size                       "4k";
        proxy_buffers                           4 "4k";

        proxy_http_version                      1.1;

        proxy_cookie_domain                     off;
        proxy_cookie_path                       off;

        # In case of errors try the next upstream server before returning an error
        proxy_next_upstream                     error timeout invalid_header http_502 http_503 http_504;

        proxy_pass http://upstream-default-backend;
        }

        # health checks in cloud providers require the use of port 80
        location /healthz {
        access_log off;
        return 200;
        }

        # this is required to avoid error if nginx is being monitored
        # with an external software (like sysdig)
        location /nginx_status {
        allow 127.0.0.1;
        allow ::1;
        deny all;

        access_log off;
        stub_status on;
        }
    }
    server {
        server_name sample-app.com;
        listen 80 proxy_protocol;
        listen [::]:80 proxy_protocol;
        set $proxy_upstream_name "-";
        location /foo {
        set $proxy_upstream_name "default-nginx-default-backend-80";

        port_in_redirect off;
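
        # The zone defined above is applied here; whitelisted addresses
        # bypass the limit because their key maps to an empty string.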

        limit_req zone=default_sample-app_rpm burst=50 nodelay;
        client_max_body_size                    "1m";

        proxy_set_header Host                   $best_http_host;

        # Pass the extracted client certificate to the backend

        # Allow websocket connections
        proxy_set_header                        Upgrade           $http_upgrade;
        proxy_set_header                        Connection        $connection_upgrade;

        proxy_set_header X-Real-IP              $the_real_ip;
        proxy_set_header X-Forwarded-For        $the_real_ip;
        proxy_set_header X-Forwarded-Host       $best_http_host;
        proxy_set_header X-Forwarded-Port       $pass_port;
        proxy_set_header X-Forwarded-Proto      $pass_access_scheme;
        proxy_set_header X-Original-URI         $request_uri;
        proxy_set_header X-Scheme               $pass_access_scheme;

        # mitigate HTTPoxy Vulnerability
        # https://www.nginx.com/blog/mitigating-the-httpoxy-vulnerability-with-nginx/
        proxy_set_header Proxy                  "";

        # Custom headers to proxied server

        proxy_connect_timeout                   5s;
        proxy_send_timeout                      60s;
        proxy_read_timeout                      60s;

        proxy_redirect                          off;
        proxy_buffering                         off;
        proxy_buffer_size                       "4k";
        proxy_buffers                           4 "4k";

        proxy_http_version                      1.1;

        proxy_cookie_domain                     off;
        proxy_cookie_path                       off;

        # In case of errors try the next upstream server before returning an error
        proxy_next_upstream                     error timeout invalid_header http_502 http_503 http_504;

        proxy_pass http://default-nginx-default-backend-80;
        }
        location / {
        set $proxy_upstream_name "upstream-default-backend";

        port_in_redirect off;

        client_max_body_size                    "1m";

        proxy_set_header Host                   $best_http_host;

        # Pass the extracted client certificate to the backend

        # Allow websocket connections
        proxy_set_header                        Upgrade           $http_upgrade;
        proxy_set_header                        Connection        $connection_upgrade;

        proxy_set_header X-Real-IP              $the_real_ip;
        proxy_set_header X-Forwarded-For        $the_real_ip;
        proxy_set_header X-Forwarded-Host       $best_http_host;
        proxy_set_header X-Forwarded-Port       $pass_port;
        proxy_set_header X-Forwarded-Proto      $pass_access_scheme;
        proxy_set_header X-Original-URI         $request_uri;
        proxy_set_header X-Scheme               $pass_access_scheme;

        # mitigate HTTPoxy Vulnerability
        # https://www.nginx.com/blog/mitigating-the-httpoxy-vulnerability-with-nginx/
        proxy_set_header Proxy                  "";

        # Custom headers to proxied server

        proxy_connect_timeout                   5s;
        proxy_send_timeout                      60s;
        proxy_read_timeout                      60s;

        proxy_redirect                          off;
        proxy_buffering                         off;
        proxy_buffer_size                       "4k";
        proxy_buffers                           4 "4k";

        proxy_http_version                      1.1;

        proxy_cookie_domain                     off;
        proxy_cookie_path                       off;

        # In case of errors try the next upstream server before returning an error
        proxy_next_upstream                     error timeout invalid_header http_502 http_503 http_504;

        proxy_pass http://upstream-default-backend;
        }
    }

    # default server, used for NGINX healthcheck and access to nginx stats
    server {
        # Use the port 18080 (random value just to avoid known ports) as default port for nginx.
        # Changing this value requires a change in:
        # https://github.com/kubernetes/ingress/blob/master/controllers/nginx/pkg/cmd/controller/nginx.go
        listen 18080 default_server reuseport backlog=511;
        listen [::]:18080 default_server reuseport backlog=511;
        set $proxy_upstream_name "-";

        location /healthz {
            access_log off;
            return 200;
        }

        location /nginx_status {
            set $proxy_upstream_name "internal";

            access_log off;
            stub_status on;
        }

        # this location is used to extract nginx metrics
        # using prometheus.
        # TODO: enable extraction for vts module.
        location /internal_nginx_status {
            set $proxy_upstream_name "internal";

            allow 127.0.0.1;
            allow ::1;
            deny all;

            access_log off;
            stub_status on;
        }

        location / {
            set $proxy_upstream_name "upstream-default-backend";
            proxy_pass             http://upstream-default-backend;
        }

    }

    # default server for services without endpoints
    server {
        listen 8181;
        set $proxy_upstream_name "-";

        location / {
            return 503;
        }
    }
}

stream {
    log_format log_stream [$time_local] $protocol $status $bytes_sent $bytes_received $session_time;

    access_log /var/log/nginx/access.log log_stream;

    error_log  /var/log/nginx/error.log;

    # TCP services

    # UDP services
}

@aledbf aledbf self-assigned this Aug 22, 2017
@aledbf (Member)

aledbf commented Aug 22, 2017

/lgtm

@k8s-ci-robot k8s-ci-robot added the lgtm "Looks good to me", indicates that a PR is ready to be merged. label Aug 22, 2017
@aledbf (Member)

aledbf commented Aug 22, 2017

@sethpollack this looks great. Thanks!

@coveralls

Coverage Status

Coverage decreased (-0.004%) to 44.456% when pulling 6253c34 on sethpollack:whitelist into 1da974f on kubernetes:master.

@aledbf (Member)

aledbf commented Aug 22, 2017

> Any thoughts on https://github.com/kubernetes/ingress/pull/1210/files#diff-2e6661544fdede38b5f2d11d5e28f1b8R343? I couldn't use the ingress namespace/name directly because - and . are valid characters in an ingress name but not in an NGINX variable name, and simply replacing those characters does not guarantee uniqueness.

To keep this simple I think we can add a random prefix.

@sethpollack (Contributor, Author)

Do you think it makes more sense to do the encoding in ingress/annotations/ratelimit/main.go when we set the RateLimit name instead of using buildWhitelistVariable?

@aledbf aledbf merged commit def5155 into kubernetes:master Aug 22, 2017
@aledbf (Member)

aledbf commented Aug 22, 2017

> Do you think it makes more sense to do the encoding in ingress/annotations/ratelimit/main.go when we set the RateLimit name instead of using buildWhitelistVariable?

No, because if we are running with --v=3 we dump the configuration JSON, and if we encode the name we don't know the real name.
