Filters seem to be ignored during unmanaged-files inspection #2193
I started this again and got nearly the same error, but this time it is "open /var/spool/cups/c105580: no such file or directory". I guess that temporary files make machinery crash. But how could I possibly inspect a running host without having temporary files?
Thx for your report. So the issue seems to be that the files are there during inspection but get removed before extraction. For now you could either inspect without extraction or filter the affected directories. The only thing that puzzles me is how … Can you check if …?
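A minimal sketch of the filtering workaround, assuming machinery's --scope and --skip-files options (the concrete option named in the comment was lost in extraction, and the directory list is only an example):

# Exclude volatile directories from the unmanaged-files inspection so
# short-lived files cannot vanish between inspection and extraction.
machinery inspect -x sapdisk --scope=unmanaged-files \
  --skip-files=/var/spool/cups,/tmp,/var/tmp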
Hm, when I enter …. When I enter ….
The … Did the inspection work fine with the …?
It still runs... It caused a load of 55+ on the inspected host, which became unresponsive, so I had to kill the machinery processes. I started again and excluded the second (100 GB) and third (1 TB) file systems, so it should only care about the / filesystem (100 GB).
And just at this moment it crashes again:
So, I gave …
Thx for the update. Yes, this is a second issue. For the first one we already have an entry under #2188, which is why I have changed the title of this issue accordingly.
I was able to reproduce the issue and created a workaround for it. I will let you know as soon as it is released. To reproduce the issue you can run the following command inside a managed repository while inspecting unmanaged-files with extraction:
…
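The reproducer itself was lost in extraction; what follows is a hypothetical shell sketch of the race it exercises, assuming only that the helper first builds a file list and then reads the files during extraction (hostname and file name are illustrative):

# A file exists while the unmanaged-files list is built, but is removed
# before extraction reads it; unfixed versions abort with
# "no such file or directory" instead of skipping the file.
touch transient-file                                # present during inspection
machinery inspect -x localhost --scope=unmanaged-files &
sleep 5 && rm transient-file                        # gone before it is read
wait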
We have released version 1.22.2, which should fix your issue, at least when you use filtering. Could you verify it?
It does not take so long to break machinery now :(
Thx for the report. I could not reproduce the issue yet but will look into it. Until then, can you try whether the unmanaged-files issue is fixed by running:
…
You could also try to remove some of the filters, but there still might be an issue during extraction if you do not filter the volatile directories. With filtering, though, it should not crash anymore. To inspect everything but repositories you can use the … option (see the sketch below).
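A sketch of inspecting everything but repositories, assuming machinery's --exclude-scope option (the exact option name was lost in extraction):

# Inspect all scopes except repositories
machinery inspect -x sapdisk --exclude-scope=repositories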
Is it possible that you have added new repositories in the meantime? I do not think that we touched this area of the code, at least not directly. Do you have the same issue with an older version?
I switched from the machinery rpm in the Leap Update 42.1 repo to the systemsmanagement:machinery repo, since this was the only way I found an rpm file for 1.22.2. Alas, this version again ignores the filters - with …
It doesn't crash (yet), but it still ignores the filters. BTW, is there any chance to start machinery on the inspected host with an …?
Yes, the helper binary does not support filters yet; we just fixed the crashes in case of removed/inaccessible files. The result is then filtered later on. Using a lower priority from the start does make sense, I guess, but you should be able to change it manually if …
Oh, I see. Bad luck then, I guess. If there is no chance to filter during scanning, machinery is unusable for me. The 1 TB volume is shared via NFS among all of my hosts. sapdisk is the NFS server (hence its hostname). The volume takes hours to scan, maybe a backup comes in during inspection, and hundreds of files are removed and brought back (all in the …). I know about the …. In the comparable case of the seccheck script, I patch it manually: first, the device is removed from the device list, and then the cron job gets an ….
Thx for your debugging. Can't you get the process id of the machinery-helper on the inspected host with …? Anyway, I will create a new issue for the machinery-helper filter support. The crash at least should be fixed by the patch release. Regarding the repository issue, can you create a new issue and mention there whether you added new repositories and, if not, whether the inspection works with the current Leap version? Thx in advance.
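A hedged sketch of lowering the helper's priority by hand with standard Linux tools (pgrep, renice, ionice); the process name pattern is an assumption:

# Find the helper's PID on the inspected host and lower its CPU and
# I/O priority so a long scan does not starve other workloads.
pid=$(pgrep -f machinery-helper)
renice -n 19 -p "$pid"    # lowest CPU priority
ionice -c 3 -p "$pid"     # idle I/O scheduling class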
One thing that puzzles me is that by default …
Ah, OK, didn't test that; I just thought that "since the helper binary does not support filters yet" means it scans all available filesystems, but it does so with local filesystems only. The fs is mounted via automount/NFSv3 on ~20 hosts. So, the problem with the gigantic unneeded file scan is on my central host sapdisk only. Sounds better now :) I'll try …
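To double-check which mounts a local-filesystems-only scan skips, the NFS mounts can be listed with findmnt (a standard util-linux tool):

# List NFS mounts; a scan restricted to local filesystems ignores these
findmnt -t nfs,nfs4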
This issue can be closed I guess. I have already created a new issue for the filter support for machinery-helper.
machinery inspect -x sapdisk
Inspecting sapdisk for os, packages, patterns, repositories, users, groups, services, changed-config-files, changed-managed-files, unmanaged-files...
Note: There are filters being applied during inspection. (Use --verbose option to show the filters)
Inspecting os...
-> Found operating system 'SUSE Linux Enterprise Server 11' version '11 SP4'.
Inspecting packages...
-> Found 1879 packages.
Inspecting patterns...
-> Found 24 patterns.
Inspecting repositories...
-> Found 79 repositories.
Inspecting users...
-> Found 77 users.
Inspecting groups...
-> Found 81 groups.
Inspecting services...
-> Found 175 services.
Inspecting changed-config-files...
-> Extracted 158 changed configuration files.
Inspecting changed-managed-files...
-> Extracted 105 changed managed files.
Inspecting unmanaged-files...
2016/11/07 15:43:34 open /tmp/.security.xl6yCR/: no such file or directory
Machinery experienced an unexpected error. Please file a bug report at: https://github.com/SUSE/machinery/issues/new
Execution of "ssh root@sapdisk -o LogLevel=ERROR LANGUAGE= LC_ALL=en_US.utf8 /root/machinery-helper --extract-metadata" failed with status 1 (error output streamed away).
Backtrace:
/usr/lib64/ruby/gems/2.1.0/gems/machinery-tool-1.22.0/bundle/ruby/2.1.0/gems/cheetah-0.4.0/lib/cheetah.rb:555:in `check_errors'
/usr/lib64/ruby/gems/2.1.0/gems/machinery-tool-1.22.0/bundle/ruby/2.1.0/gems/cheetah-0.4.0/lib/cheetah.rb:364:in `run'
/usr/lib64/ruby/gems/2.1.0/gems/machinery-tool-1.22.0/lib/logged_cheetah.rb:23:in `run'
/usr/lib64/ruby/gems/2.1.0/gems/machinery-tool-1.22.0/lib/remote_system.rb:92:in `run_command'
/usr/lib64/ruby/gems/2.1.0/gems/machinery-tool-1.22.0/lib/machinery_helper.rb:60:in `run_helper'
/usr/lib64/ruby/gems/2.1.0/gems/machinery-tool-1.22.0/plugins/unmanaged_files/unmanaged_files_inspector.rb:85:in `run_helper_inspection'
/usr/lib64/ruby/gems/2.1.0/gems/machinery-tool-1.22.0/plugins/unmanaged_files/unmanaged_files_inspector.rb:59:in `inspect'
/usr/lib64/ruby/gems/2.1.0/gems/machinery-tool-1.22.0/lib/inspect_task.rb:87:in `block in build_description'
/usr/lib64/ruby/gems/2.1.0/gems/machinery-tool-1.22.0/lib/inspect_task.rb:79:in `each'
/usr/lib64/ruby/gems/2.1.0/gems/machinery-tool-1.22.0/lib/inspect_task.rb:79:in `build_description'
/usr/lib64/ruby/gems/2.1.0/gems/machinery-tool-1.22.0/lib/inspect_task.rb:21:in `inspect_system'
/usr/lib64/ruby/gems/2.1.0/gems/machinery-tool-1.22.0/lib/cli.rb:640:in `block (2 levels) in <class:Cli>'
/usr/lib64/ruby/gems/2.1.0/gems/machinery-tool-1.22.0/bundle/ruby/2.1.0/gems/gli-2.13.1/lib/gli/command_support.rb:126:in `call'
/usr/lib64/ruby/gems/2.1.0/gems/machinery-tool-1.22.0/bundle/ruby/2.1.0/gems/gli-2.13.1/lib/gli/command_support.rb:126:in `execute'
/usr/lib64/ruby/gems/2.1.0/gems/machinery-tool-1.22.0/bundle/ruby/2.1.0/gems/gli-2.13.1/lib/gli/app_support.rb:296:in `block in call_command'
/usr/lib64/ruby/gems/2.1.0/gems/machinery-tool-1.22.0/bundle/ruby/2.1.0/gems/gli-2.13.1/lib/gli/app_support.rb:309:in `call'
/usr/lib64/ruby/gems/2.1.0/gems/machinery-tool-1.22.0/bundle/ruby/2.1.0/gems/gli-2.13.1/lib/gli/app_support.rb:309:in `call_command'
/usr/lib64/ruby/gems/2.1.0/gems/machinery-tool-1.22.0/bundle/ruby/2.1.0/gems/gli-2.13.1/lib/gli/app_support.rb:83:in `run'
/usr/lib64/ruby/gems/2.1.0/gems/machinery-tool-1.22.0/bin/machinery:41:in `<top (required)>'
/usr/bin/machinery:24:in `load'
/usr/bin/machinery:24:in `<main>'