[WIP] Update upload/download workflow for MiqS3Session #17798
Conversation
I agree with the idea that backing up locally and then uploading subsequently is sub-optimal, based on the file size argument.

At first glance I don't like the idea of the specific class showing through in this part of the code, as in the ideal case the caller should be able to use any of the sessions identically. As a separate design issue, I don't think the actual upload or download should be done in the

With all that in mind, it seems to me that the method of uploading/downloading the file (especially if there is some "trick" to it like splitting) might be better suited to living in the actual session object as the implementation of

Again, this is just a first thought and we can discuss more in the meeting. I already feel like this could be difficult because of the way the file splitting is effectively "streaming" the backup file to the "mount", but maybe we can alter the interface of
@carbonin A few comments in line as a rebuttal. I would rather have these talking points on paper for myself or I will forget them:
No problem, and I left out many details in the description since I had mentioned it elsewhere (out of band), but I will comment on my rationale in the subsequent points:
I get this, but this is the special case here, and it was specifically what broke when running this with splits. In the previous versions of the code, the rake task did not need to know that the file output happened, because that responsibility was passed on to

As the person who refactored to make
I humbly disagree. The
Again, my big issue with this is that S3 is NOT a "filesystem", or at least can't easily quack like one. The use case that existed here, however, pretty much assumed that

```ruby
Dir.chdir do
  `pg_dump ...` # (or `pg_basebackup`/`pg_restore`)
end
```

Again, FTP has the exact same problems as
The problem with using

I will say, I 100% agree with your concern about

I do, again, think we should have a distinction between "mountables" and "upload endpoints", as neither
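The comment above suggests moving the upload/download behavior into the session objects themselves so callers can treat every session identically. A minimal sketch of that idea, assuming hypothetical class and method names (these are NOT the actual ManageIQ classes):

```ruby
# Hypothetical sketch only: for a true mounted filesystem, "uploading"
# is just writing to the mount point, so the default is a no-op after
# the dump completes.
class GenericMountSession
  def upload(_local_file, _remote_uri)
    # nothing to do: the file was already written directly to the mount
  end
end

# An object store (e.g. S3) can't be written to like a filesystem, so
# it overrides #upload to push the finished dump explicitly.
class ObjectStoreSession < GenericMountSession
  def upload(local_file, remote_uri)
    "uploaded #{local_file} to #{remote_uri}" # placeholder for a real API call
  end
end

# The caller stays agnostic about which session type it was handed.
def backup(session, local_file, remote_uri)
  session.upload(local_file, remote_uri)
end
```

With this shape, `backup` never needs to check for a specific class; the S3-style session simply does more work in its `upload` implementation.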
Force-pushed from d4991a0 to d251995:
In EvmDatabaseOps, this change makes it so only the S3 mount will attempt an upload/download on the necessary actions, and the rest will remain as they were.
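The change described above can be pictured as a guard in the backup path, so that only an S3-backed session triggers an explicit upload while real mounts keep their previous behavior. This is an illustrative sketch with stub classes, not the actual EvmDatabaseOps code (only the `MiqS3Session` name comes from the PR):

```ruby
# Stub stand-ins so the sketch runs standalone; the real classes live
# in manageiq-gems-pending.
class MiqGenericMountSession; end

class MiqS3Session
  def upload(file)
    "uploaded #{file}" # placeholder for the real S3 upload
  end
end

# Only the S3 pseudo-mount needs an explicit upload step; for mounted
# filesystems the dump was already written in place on the mount.
def finalize_backup(session, backup_file)
  if session.is_a?(MiqS3Session)
    session.upload(backup_file)
  else
    backup_file
  end
end
```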
Force-pushed from d251995 to 24cbe08.
Checked commit NickLaMuro@24cbe08 with ruby 2.3.3, rubocop 0.52.1, haml-lint 0.20.0, and yamllint 1.10.0 lib/evm_database_ops.rb
Going to set this as

Might switch from piping to a subprocess to streaming back into the main process, and the interface might change a bit from what we have here.
As I am pretty confident at this point that this is the way forward, and the comment above will be addressed by this PR: ManageIQ/manageiq-gems-pending#361, I am going to close this PR as a result. It is referenced elsewhere if we need it, but I doubt we will go in this direction. Plus it is taking up yet another tab in my browser...
This is an update to the changes made in #17689 regarding the changes to `EvmDatabaseOps` only.

The changes here are necessary to allow #17652 to function again, which will be updated to include these changes as well.
Background
The FTP and S3 support for database backups took two different approaches to handling the backup files.
For the S3 route, the plan was to treat it as a pseudo mount, but that meant the upload for the backup and the download for the restore had to be done manually (unlike the other mounted filesystems).
The plan for splitting files (and eventually FTP) was to use the file splitter to pipe the results into the FTP endpoint. That would work the same for the mounts (sans the FTP bit), since they are just filesystems and it will dump the files locally as it does.
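The split-while-streaming idea described above can be sketched minimally: read the backup stream in fixed-size chunks and hand each chunk to a sink (a local file for mounts, or an FTP/S3 put for remote endpoints). Everything here is illustrative; the tiny chunk size and in-memory stream are purely for demonstration and this is not the actual file splitter from the project:

```ruby
require "stringio"

# Read +io+ in +chunk_size+ byte pieces until EOF. A real splitter
# would write each piece out as "backup.00001", "backup.00002", etc.,
# or push it to a remote endpoint, instead of collecting in memory.
def split_stream(io, chunk_size)
  parts = []
  while (chunk = io.read(chunk_size))
    parts << chunk
  end
  parts
end
```

Usage with an in-memory stand-in for the `pg_dump` output: `split_stream(StringIO.new("abcdef"), 2)` yields the three 2-byte chunks in order.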
Links
Steps for Testing/QA
Like my other PRs, I have been using the gist from above to test the changes against all the current filesystems in the console. This does not, however, test `MiqS3Session`, since I was unaware it existed previously, so testing that will need to happen separately.