[coredump] Refactor info and dump collection #3811
Conversation
Congratulations! One of the builds has completed. 🍾 You can install the built RPMs by following these steps:
Please note that the RPMs should be used only in a testing environment.
So this one turned out more interesting than I thought. The dump size reported by From that, we can just collect the compressed coredumps, and then drop a symlink in the plugin dir to help users align which coredump they're looking for from the output of
Ubuntu 18.04 failures are due to the Python version. We recently agreed to move to 3.8 as the minimum version. cirrus.yaml should be updated shortly, when the GCE image for the new Ubuntu dev release is available, per IRC.
PR on hold until the Cirrus test matrix is updated.
sos/report/plugins/coredump.py
Outdated
core = re.search(r"(^\s*Storage:(.*)(\(present\)))", res, re.M)
try:
    core_path = core.groups()[1].strip()
    self.add_copy_spec(core_path, tailit=False, sizelimit=100)
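For context, a minimal standalone sketch of how that `Storage:` regex extracts the coredump path from `coredumpctl info` output. The sample output text here is illustrative only (real output varies by systemd version), and the filename is made up:

```python
import re

# Illustrative excerpt of `coredumpctl info` output; not taken from the PR.
res = """\
           PID: 1234 (myapp)
       Storage: /var/lib/systemd/coredump/core.myapp.0.abc.1234.zst (present)
       Message: Process 1234 (myapp) of user 1000 dumped core.
"""

# Match only a Storage: line that systemd marks "(present)", meaning the
# compressed coredump file still exists on disk.
core = re.search(r"(^\s*Storage:(.*)(\(present\)))", res, re.M)
if core:
    # group 2 is the path between "Storage:" and "(present)"
    core_path = core.groups()[1].strip()
    print(core_path)
```

This prints the bare path, which is what then gets handed to `add_copy_spec` in the plugin.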
If we skip collecting the core due to its >100M size, will cores_collected be incremented? (Ideally it shouldn't.)
That's a good catch of a current blind spot in tailit handling. We don't actually give any indication to the caller that the collection was skipped. Will dig into this.
Is it worth having an option that allows copying a bigger size? I had a coredump that was 2GB in size recently, and we needed it. I think it would be worthwhile as an option? Thoughts?
I'm hesitant to add plugin options around size limiting. Though... what about extending log_size to a per-plugin setting, similar to timeout?
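To make the suggestion concrete: sos already supports per-plugin timeouts via `-k plugname.timeout=N`, and the idea is to resolve a per-plugin size limit the same way, falling back to the global log_size. This is a hypothetical sketch of that lookup logic, not sos's actual implementation; the helper name and option map are assumptions:

```python
# Hypothetical sketch of per-plugin size-limit resolution with a global
# fallback, mirroring the `-k plugname.option=value` command-line syntax.
# This is NOT sos's real internals; names are illustrative.
def get_size_limit(plugin_opts, plugin_name, global_log_size):
    """Return the size limit (in MB) for a plugin.

    plugin_opts maps "plugname.option" strings to values, as parsed
    from repeated -k arguments.
    """
    key = f"{plugin_name}.log_size"
    return plugin_opts.get(key, global_log_size)

opts = {"coredump.log_size": 2048}
per_plugin = get_size_limit(opts, "coredump", 25)    # per-plugin override
fallback = get_size_limit(opts, "networking", 25)    # global default
print(per_plugin, fallback)
```

The appeal of this shape is that no new plugin option is needed; the existing global knob just gains a scoped override, exactly like timeout.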
sos/report/plugins/coredump.py
Outdated
plugin_name = "coredump"
profiles = ('system', 'debug')
packages = ('systemd-udev', 'systemd-coredump')

option_list = [
    PluginOpt("detailed", default=False,
              desc="collect detailed information for every report"),
    PluginOpt("dumps", default=5, desc="number of dump files to collect"),
Collecting 5 coredumps as a default? Isn't that too much?
I am a bit inclined toward 3 as the default, but have no strong preference (and no real use case or user experience for how many coredumps are worth collecting).
5 was a shot in the dark number, no problem with dropping to 3.
Refactor the plugin to adjust how `coredumpctl info` and coredump file collection are handled.

The plugin will now collect the compressed coredump file for coredump entries for which coredumpctl reports that the file is present, for the first X coredump files, where X is the new `dumps` plugin option.

A second new plugin option, `executable`, can be used to specify a regex string to match coredump entries against to determine whether they should be collected. The default behavior is to collect for all entries.

A symlink for coredump file collections will be dropped in the plugin directory to aid in associating dump files with coredumpctl entries, as it may not always be obvious for end users based on filenames alone.

Signed-off-by: Jake Hunsaker <[email protected]>
Dropped the number of cores from 5 to 3. Bumped the core size default to 200, since 100 can reasonably be considered small for coredumps. There's not a great way to detect skips from a_c_s today, so inserted a
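The selection behavior described in the commit message can be sketched in a few lines: take the first `dumps` entries, optionally filtered by an `executable` regex. This is a simplified illustration of the described logic, not the plugin's actual code, and the entry dicts are made up:

```python
import re

# Simplified sketch of the commit's selection logic: collect the first
# `dumps` coredump entries whose executable matches the optional
# `executable` regex. Entry data below is illustrative only.
def select_dumps(entries, dumps=3, executable=None):
    pattern = re.compile(executable) if executable else None
    selected = []
    for entry in entries:
        if pattern and not pattern.search(entry["exe"]):
            continue
        selected.append(entry)
        if len(selected) == dumps:
            break
    return selected

entries = [
    {"exe": "/usr/bin/foo"},
    {"exe": "/usr/sbin/bar"},
    {"exe": "/usr/bin/foo"},
    {"exe": "/usr/bin/baz"},
]
chosen = [e["exe"] for e in select_dumps(entries, dumps=2, executable="foo")]
print(chosen)
```

With no `executable` option set, the filter is skipped entirely and the first `dumps` entries are taken, matching the default collect-everything behavior.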