
Disable access log for TCP services #2406

Closed
zegl opened this issue Apr 23, 2018 · 28 comments
Labels
kind/feature Categorizes issue or PR as related to a new feature. lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed.

Comments


zegl commented Apr 23, 2018

Is this a BUG REPORT or FEATURE REQUEST? (choose one):

Feature request.

I'm using ingress-nginx to proxy TCP services, everything is working fine, but I'd like to have the option to disable the access logs for one (or all) of the TCP backends.

Nginx is creating lots of log lines like this one: [23/Apr/2018:12:58:49 +0000]TCP200000.001. As these entries are not very informative, I'd like to remove them completely. That does not seem to be possible at the moment.

The enable-access-log annotation is available for HTTP backends, but not for TCP or UDP backends.

NGINX Ingress controller version:

quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.12.0
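
For context, the controller exposes TCP services through a dedicated ConfigMap; a minimal sketch, following the tcp-services ConfigMap layout from the ingress-nginx docs (namespace and service names here are illustrative):

    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: tcp-services
      namespace: ingress-nginx
    data:
      # external port 9000 -> port 80 of service "bar" in namespace "foo"
      "9000": "foo/bar:80"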


zegl commented May 17, 2018

I'm willing to contribute this feature. How would you suggest handling configuration options for L4 services?

I'm not a fan of the :PROXY:PROXY option, as it doesn't scale very well with the number of options. Using :PROXY:PROXY:NOLOG / ::NOLOG or something similar to disable access logging would be an easy way to solve the problem, but not a very elegant one.

Could we use some sort of key/value system for tagging L4 services? Like this:

"9000": "foo/bar:80,accessLog=false,upstreamProxy=true,downstreamProxy=false"

Switching the format while keeping backwards compatibility with the existing one should be easy.

What do you say @aledbf? Do you have a plan for how to handle this in the future?
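
Side by side, the proposed key/value format could coexist with the existing entries in the same ConfigMap; a sketch (the accessLog/upstreamProxy/downstreamProxy keys are the proposal above, not an implemented API):

    data:
      # existing formats
      "9000": "foo/bar:80"
      "9001": "foo/bar:443:PROXY:PROXY"
      # proposed key/value format
      "9002": "foo/bar:80,accessLog=false,upstreamProxy=true,downstreamProxy=false"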

@k8s-ci-robot k8s-ci-robot added kind/feature Categorizes issue or PR as related to a new feature. and removed enhancement labels Jun 5, 2018
@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Sep 3, 2018
@AndreaGiardini

@aledbf Hey! :) We would appreciate it if you could remove the lifecycle/stale label and give your feedback on this issue, since it looks like several people are interested :)

@fejta-bot

Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten

@k8s-ci-robot k8s-ci-robot added lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels Oct 3, 2018

zegl commented Oct 3, 2018

/remove-lifecycle rotten

@k8s-ci-robot k8s-ci-robot removed the lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. label Oct 3, 2018

aledbf commented Oct 8, 2018

Closing. The TCP and UDP features are being removed in the next release.

@aledbf aledbf closed this as completed Oct 8, 2018
@AndreaGiardini

I don't understand... This is a feature that is being used by a lot of people, why remove it? Is there any place where we can discuss it?


aledbf commented Oct 9, 2018

I don't understand... It is a feature that is being used by a lot of people, why remove it? Is there any place where we can discuss about it?

Please check my comment #3197 (comment)

@anton-johansson

@aledbf: Could we re-open this, now that the removal of the TCP and UDP features has been reverted?

I would also like to disable stream logs. I don't have any TCP or UDP services specified, but I still get some of these stream logs (not sure why that happens). They end up uncategorized in my ELK stack. I could always drop them in my Logstash pipeline, but I'd much rather not have them in the logs at all.

@aledbf aledbf reopened this Feb 12, 2019

aledbf commented Feb 12, 2019

@anton-johansson sure, but someone needs to work on this :)

@anton-johansson

@aledbf: Of course, but it's a start. :) It seems like a fairly small change, maybe suited for a first contribution?

What about just adding a 2nd setting (Go templates use the or function, there is no infix OR):

    {{ if or $cfg.DisableAccessLog $cfg.DisableStreamAccessLog }}
    access_log off;
    {{ else }}

@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label May 13, 2019

zegl commented May 14, 2019

/remove-lifecycle stale

@k8s-ci-robot k8s-ci-robot removed the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label May 14, 2019

efedunin commented Jul 9, 2019

@zegl I have also experienced problems with default log format:

[09/Jul/2019:07:13:59 +0000]TCP20063440432430.738

It is absolutely useless and "unsearchable" (in Elasticsearch, for example).
This can be changed via the log-format-stream option in the ConfigMap.
I use the following format (note escaped quotes!):

"log-format-stream": "\"[$time_local] $protocol $status $upstream_addr $upstream_bytes_sent $upstream_bytes_received $upstream_connect_time $upstream_first_byte_time $upstream_session_time\""

This leads to the following output:

[09/Jul/2019:07:17:22 +0000] TCP 200 10.240.0.112:22 12606 3829557 0.000 0.004 0.352

A possible fix for the default format would be to add quotes to this string.
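
Applied via the controller's ConfigMap, the same format would look something like this; a sketch, assuming the default ConfigMap name and namespace of common ingress-nginx installs (yours may differ), with the escaped quotes from the JSON form becoming ordinary quotes inside a single-quoted YAML scalar:

    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: nginx-configuration
      namespace: ingress-nginx
    data:
      log-format-stream: '"[$time_local] $protocol $status $upstream_addr $upstream_bytes_sent $upstream_bytes_received $upstream_connect_time $upstream_first_byte_time $upstream_session_time"'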

@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Oct 7, 2019
@fejta-bot

Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten

@k8s-ci-robot k8s-ci-robot removed the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Nov 6, 2019
@k8s-ci-robot k8s-ci-robot added the lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. label Nov 6, 2019

zegl commented Nov 6, 2019

/remove-lifecycle rotten

@k8s-ci-robot k8s-ci-robot removed the lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. label Nov 6, 2019
@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Feb 4, 2020
@fejta-bot

Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten

@k8s-ci-robot k8s-ci-robot added lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels Mar 5, 2020
@fejta-bot

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close

@k8s-ci-robot

@fejta-bot: Closing this issue.

In response to this:

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.


MeNsaaH commented Oct 24, 2021

Hello @zegl,
Was this ever implemented?

@Excodibur

I would be interested in this as well. Currently the log flood from TCP health checks makes this feature almost unusable for me, sadly.

@Excodibur

FYI, as a workaround I got it to a somewhat usable state for me by effectively throwing away all TCP log messages (so the health checks are no longer logged). My change in the Helm chart configuration:

controller:
  config:
    stream-access-log-path: "/dev/null"

Regular HTTP requests are still logged, so I think this is good enough for me. 😄


MeNsaaH commented Oct 27, 2021

@Excodibur this works.

But that'll disable all access logging. 😫

@Excodibur

@MeNsaaH

For TCP access request logging, basically yes, it is disabled. The HTTP endpoints exposed through that same controller still generate access logs for me.

The ideal solution in my mind would be an option we could set to filter out (empty) health-check requests from access logs. Something similar to https://cbonte.github.io/haproxy-dconv/1.7/configuration.html#4-option%20dontlognull.


MeNsaaH commented Oct 27, 2021

I totally agree with that. An option to just filter out health-check requests would be perfect. But your solution still works 😄

@Excodibur

On further thought, perhaps it is doable by manually changing nginx.conf to something like:

log_format log_stream '[$remote_addr] [$time_local] $protocol $status $bytes_sent $bytes_received $session_time';

map $bytes_received $notAHealthCheck {
    # exact match: connections that received 0 bytes are health checks
    # (a regex like "~0" would also match 10, 200, etc.)
    0        0;
    default  1;
}
access_log /var/log/nginx/access.log log_stream if=$notAHealthCheck;

But I really would like to stay away from altering the conf file myself when using the Helm chart. 😃
