
Avoid log truncation due to log-rotation delays (when logfiles exceed 1.8GB faster than log rotation) #55

Closed
yuwenma opened this issue Apr 5, 2019 · 20 comments · Fixed by #56
Labels
kind/feature Categorizes issue or PR as related to a new feature. priority/important-soon Must be staffed and worked on either currently, or very soon, ideally in time for the next release.

Comments

yuwenma commented Apr 5, 2019

/kind feature
(mirrored from k/k issue 76210)
Describe the solution you'd like
klog supports the "log-file" flag to specify the log file path. However, when the logfile reaches 1.8GiB before log rotation has run, the older logs are truncated. See details in PR #76190.

We want to keep all the log messages. If the log file grows too large, we expect the older logs to be stored in a separate file such as kube-apiserver.log.gz.

Anything else you would like to add:
This issue needs to be fixed at least one month before v1.15, since it blocks some v1.15 release features (for example, rebasing images from debian/alpine to distroless depends heavily on this klog feature).

@k8s-ci-robot k8s-ci-robot added the kind/feature Categorizes issue or PR as related to a new feature. label Apr 5, 2019
@DirectXMan12

for future reference: you can reference PRs and issues by putting a # sign in front of the number, like #76190. If they're cross-repository, just put the repository name in front: kubernetes/kubernetes#76190

@DirectXMan12

/priority important-soon

@k8s-ci-robot k8s-ci-robot added the priority/important-soon Must be staffed and worked on either currently, or very soon, ideally in time for the next release. label Apr 5, 2019

yuwenma commented Apr 5, 2019

@mborsz @tallclair @dims Moving the discussion to this issue.

Maciej, I tried using the same manifest to create a custom cluster, and I do see the kube-apiserver.log-XXX.gz files.

https://user-images.githubusercontent.com/5606098/55660065-50f04280-57b9-11e9-99d5-fa2e719ccfd3.png

Do you mind showing me how to recreate the scale-test env so that I can reproduce the problem?


yuwenma commented Apr 5, 2019

> for future reference: you can reference PRs and issues by putting a # sign in front of the number, like #76190. If they're cross-repository, just put the repository name in front: kubernetes/kubernetes#76190

Thanks Solly!


mborsz commented Apr 6, 2019

There is a race between kube-logrotate and klog: klog truncates at 1.8GB, while logrotate rotates logs every hour (if the log is greater than 100M).

If you fill up logs slowly (less than 1.8GB per hour), the current setup works. If faster, klog truncates the log before logrotate even starts.

To reproduce the issue, simply create a cluster that produces more than 1800M of logs per hour. In scalability tests we use --v=3, so each request is logged. This increases log volume significantly, but we need it to debug the tests.
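A back-of-the-envelope sketch of this race, assuming a steady write rate; the 1.8GB cap and the hourly rotation interval are the numbers from this thread, and the function name is made up for illustration:

```go
package main

import "fmt"

const (
	// klog's truncation threshold, 1024*1024*1800 bytes (~1.8GB).
	klogMaxSize = 1024 * 1024 * 1800
	// kube-logrotate interval from this thread, in hours.
	rotateIntervalHours = 1.0
)

// truncatesBeforeRotation reports whether a steady log rate (in bytes
// per hour) fills the file past klog's cap before logrotate runs.
func truncatesBeforeRotation(bytesPerHour float64) bool {
	return bytesPerHour*rotateIntervalHours > klogMaxSize
}

func main() {
	// ~500 MiB/h: logrotate wins, no truncation.
	fmt.Println(truncatesBeforeRotation(500 * 1024 * 1024))
	// ~2 GiB/h (the scalability-test case): klog truncates first.
	fmt.Println(truncatesBeforeRotation(2048 * 1024 * 1024))
}
```

Anything over ~1.8GB/h loses the race, which matches the --v=3 scalability-test symptoms described above.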

@mattjmcnaughton

Just my opinion here, but I think having klog split the log files when they grow to 1.8GB would be a violation of separation of concerns. In my mind, klog should just be responsible for writing to the file, and an external process should handle log rotation.

Would it be possible to tweak the settings of kube-logrotate so that it rotates if the file is larger than XGB, instead of on a time basis?


dims commented Apr 8, 2019

@mattjmcnaughton as mentioned in kubernetes/kubernetes#76190 (comment)

we can support reopening the log file on SIGHUP from logrotate, using, say, https://github.com/client9/reopen

@yuwenma yuwenma changed the title Enhance klog to split up the log files when it grows to 1.8GiB Avoid log truncation due to log-rotation delays (when logfiles exceeds 1.8GB faster than log rotation) Apr 8, 2019

yuwenma commented Apr 8, 2019

> Just my opinion here, but I think having klog split the log files when it grows to 1.8 GB would be a violation of separation of concerns. In my mind, klog should just be responsible for writing to the file, and an external process should handle log rotation.
>
> Would it be possible to tweak the settings of kube-logrotate so that it rotates if the file is larger than XGB, instead of on a time basis?

Thanks for pointing it out, Matt. You are right. I updated the title and description (will have another update after we confirm the detailed solution).


yuwenma commented Apr 8, 2019

Hi everyone, thanks for jumping in and helping on this issue. After gathering the different pieces of knowledge from everyone and discussing with @dims earlier today, I'd like to give a quick summary of what's going on here and to clarify some potential misunderstandings.

Root Cause:

We use logrotate to split log files periodically (every hour) and to compress the logs into a new .gz file when a log file grows too large (exceeds 100MB). At the same time, klog has its own max-log-size protection (to keep the latest log messages) that cleans up the log file completely when its size exceeds 1.8GB. After the cleanup, klog continues writing to the same log file. [In certain cases](https://github.com/kubernetes/kubernetes/pull/76190), the log file grows to 1.8GB before the hourly logrotate even runs, so the earlier logs are lost.

Note:

  1. During this process, klog can always write messages to the log file, so there is no need to worry about recovering klog from writing to a closed file.
  2. The current logrotate config uses "copytruncate", "daily" (the cron runs on an hourly basis), and "maxsize"=100M.
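For reference, such a logrotate stanza might look like the sketch below; only copytruncate, daily, and maxsize 100M come from the note above, while the path and the remaining directives are assumptions about the real cloud-provider config:

```conf
# Hypothetical path; the actual config is cloud-provider specific.
/var/log/kube-apiserver.log {
    # Copy the file and truncate it in place, so klog keeps writing
    # to the same file descriptor without needing a reopen.
    copytruncate
    # Nominal daily policy; the cron job driving logrotate runs hourly.
    daily
    # Rotate early whenever the file exceeds 100M, even mid-"day".
    maxsize 100M
    # Compress rotated copies into kube-apiserver.log-*.gz.
    compress
    rotate 5
}
```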

Solution Option 1:

Update the logrotate cronjob to run more frequently, so that the log can be stored in a different file before the log file exceeds 1.8GB, e.g. running every 20 minutes instead of hourly.
  • Pros: Easiest and fastest. The cloud provider has full control of its logs. This is also an essential approach, since log-file is a cloud-provider custom config; if the log-file flag is not used, this klog/logrotate mismatch shouldn't exist either.
  • Cons: The right logrotate frequency varies with k8s usage; if a cluster has a light workload, frequent logrotate isn't necessary. (QQ: Is there any harm in running logrotate more frequently?) And potentially it can still lose log messages.

Solution Option 2:

Loosen the 1.8GB limit. Either change the constant to a larger number (e.g. clean up log files at 5GB instead of 1.8GB) or make it a configurable variable (e.g. --cleanup-at-size=1.8GB).
  • Pros: Easy to fix. The cloud provider has more control over klog behavior.
  • Cons: Loosening the max size fixes the encountered problem but doesn't address the root cause. It buries error-prone information in the code, provides a flag for a narrow use case, and makes the code unnecessarily complicated.
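A minimal sketch of what option 2's configurable limit could look like; the flag name, default, and helper function are hypothetical, not klog's actual implementation:

```go
package main

import (
	"flag"
	"fmt"
)

// Hypothetical flag in the spirit of option 2 / --cleanup-at-size;
// 0 means unlimited, which is effectively option 3.
var logFileMaxSizeMB = flag.Uint64("log-file-max-size", 1800,
	"max log file size in MB before klog truncates it; 0 disables the limit")

// shouldTruncate stands in for klog's hard-coded 1.8GB check.
func shouldTruncate(nbytes uint64) bool {
	if *logFileMaxSizeMB == 0 {
		return false // unlimited
	}
	return nbytes >= *logFileMaxSizeMB*1024*1024
}

func main() {
	flag.Parse()
	fmt.Println(shouldTruncate(2 * 1024 * 1024 * 1024)) // 2GiB: over the default cap
	fmt.Println(shouldTruncate(100 * 1024 * 1024))      // 100MiB: under the cap
}
```

Keeping the old 1800MB default would preserve today's behavior while letting scalability clusters raise or disable the cap.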

Solution Option 3:

Disable the log-file cleanup completely (no 1.8GB limit). klog would have no logic for handling large file sizes.
  • Pros: This avoids the conflict when another process (logrotate) is also trying to handle large file sizes.
  • Cons: Having no failure plan for large logs sounds questionable; besides, @dims noted some known concerns with this approach.

Solution Option 4:

Instead of cleaning up the 1.8GB of data completely, klog could rename the file to something like kube-apiserver-TIMESTAMP.log and continue writing new logs to the log file.
  • Pros: Easy fix. Straightforward approach.
  • Cons: @dims pointed out known permission issues with klog creating new files (write permission may not be, and won't be, granted by the cloud provider).

Solution Option 5:

Abandon ubuntu logrotate and implement log rotation in klog itself.
  • Cons: Too much work compared to the others.

This is just a rough explanation. Let me know if anything's unclear. @mborsz @wojtek-t Thoughts? Are there any concerns if we go with Solution 1? 😄


dims commented Apr 8, 2019

I would definitely support option 1 and option 2.

Even if option 1 solves the problem, option 2 (I'd prefer, say, log-file-max-size) will help others who wish to use log-file instead of log-dir. We still haven't figured out where the magic 1.8GB limit came from either, so we can give those who wish to bump it up a way to do so.

So between running logrotate more frequently and specifying the max size, we should be able to support the scalability use case.


mborsz commented Apr 9, 2019

For option 1, please note that we used to run logrotate every 5 minutes, but we had to revert that change back to hourly (I don't know the details; see kubernetes/kubernetes#74915).

Personally, I support option 3. I'm not the logging expert here, but I think the current pattern in k8s (kube-apiserver writes to stderr and we redirect to a file regardless of size) worked just fine. We can do the same here and figure out something better later if needed. It's also the simplest one and separates responsibility well between klog and logrotate.


wojtek-t commented Apr 9, 2019

@jpbetz - what are the details behind reverting back to hourly logrotate?
If that is a possible option, (1) sounds fine to me.

I'm not a fan of option (2) - it introduces yet another knob to configure. Also, even if it works now, it may no longer work if we put N times more load on the apiserver, and I would like to avoid thinking about yet another thing that has to be tweaked.

I actually agree with Maciej that option (3) sounds most promising.

Options (4) and (5) mean that we simply start reimplementing log rotation in klog - I'm not sure we really want to reinvent the wheel.


yuwenma commented Apr 9, 2019

Based on the input from @dims and @mborsz, I think what we can do is: 1) add a flag that allows klog owners to define log-file-max-size; 2) set the default value to unlimited (the same behavior as option 3 -- disabling the max-size limit). Does that sound reasonable?

At the same time, I'd like to try increasing the logrotate frequency. From kubernetes/kubernetes#74915, I can't find a clear explanation of why the 5-minute logrotate causes the audit-log test flakiness. @jpbetz Do you mind me trying 20 minutes? I can revert if it keeps causing test flakiness. The thing is that this log truncation is a main blocker for the k8s image rebasing.


dims commented Apr 9, 2019

@yuwenma

+1 to 1) add the flag that allows klog owners to define log-file-max-size.

For 2) set the default value as unlimited, we should make sure the unlimited default is ONLY for log-file (not for log-dir).

Yes, let's drop the 5-minute logrotate idea. I vaguely remember the problem, but I don't remember that we ever got to a root cause (possibly the frequent compression was taking up CPU?).


yuwenma commented Apr 9, 2019

Hmmmm, why not for "log-dir"?
I think setting the default value to unlimited is an adjustment for all the use cases where a fast-growing file loses log messages once it reaches 1.8GB. Are there any specific concerns for log-dir?


dims commented Apr 9, 2019

@yuwenma we don't want to change how log-dir works. log-file is new functionality, so let's just tweak that.


jpbetz commented Apr 9, 2019

@wojtek-t The revert back to 1hr logrotate happened in the 11th hour of the 1.14 release, due to some (poorly written, IMO) tests relying on the current log containing specific information. We'd need to hunt down the exact tests, since I don't have the details handy. My recollection is that the tests need to be rewritten either to not rely on logs to check for conditions, or to also check the rotated logs, so they are resilient to having the log rotated out from under them.

FWIW, I think that if we fix the flaking tests, reapplying #72062 should get us plenty of mileage. The idea is simple: just check whether the logs need to be rotated more often, so that the policy we intended (new day starts, or log file size > 100MB) actually gets implemented. If we need to change the policy, that's fine too. But going from checking every 1hr to every 5m should solve the bulk of the large-file problems.

Sorry I haven't been able to follow up on this, if someone can pick up the task, that would be great.


yuwenma commented Apr 9, 2019

Thanks @jpbetz, it sounds like it's safer to wait until the tests are improved before changing the logrotate frequency.
Since this klog issue currently blocks the image rebasing, I'll go ahead and do what @dims suggested (make the max log file size customizable).
Besides, I also checked with the original author about the magic "1.8GB" limit: (from Rob) it was added to "avoid 32-bit signed integer overflow". So it's reasonably safe to set the default log-file-max-size value to unlimited for now.
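That explanation is consistent with the constant itself, assuming MaxSize is 1024 * 1024 * 1800 bytes as in klog's source: the cap sits just below the largest 32-bit signed integer.

```go
package main

import "fmt"

func main() {
	const maxSize = uint64(1024 * 1024 * 1800) // klog's 1.8GB cap
	const maxInt32 = uint64(1<<31 - 1)         // largest 32-bit signed integer

	fmt.Println(maxSize)            // 1887436800
	fmt.Println(maxInt32)           // 2147483647
	fmt.Println(maxSize < maxInt32) // true: byte counts stay int32-safe
}
```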

@tallclair

It looks like the test that was causing problems was the advanced audit test, which has since been marked flaky (i.e. it no longer runs as part of the blocking test suite). So it should be safe to move forward with the increased logrotate frequency.


jpbetz commented Apr 9, 2019

Thanks @timstclair, reapplying via kubernetes/kubernetes#76352
