Design proposal for sync improvements #1844
Conversation
@tejal29 I can't finish this right now, but I'll try to find time later today. The section about config change might already be interesting.
Codecov Report

@@           Coverage Diff           @@
##           master    #1844   +/-  ##
======================================
  Coverage   52.07%   52.07%
======================================
  Files         179      179
  Lines        7923     7923
======================================
  Hits         4126     4126
  Misses       3415     3415
  Partials      382      382
Currently, the pipeline config is a map of local glob pattern to destination directory. This scheme has been extended with some magic sequences, making it difficult to understand. This commit converts the sync map into a list of sync rules. All sync rules are consulted to determine the destination paths. This prepares for further changes to the sync logic. Also see GoogleContainerTools#1844 Signed-off-by: Cornelius Weig <[email protected]>
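The rule-based scheme this commit describes can be sketched as follows. `SyncRule` and `Destinations` are illustrative names for this sketch, not Skaffold's actual API:

```go
package main

import (
	"fmt"
	"path/filepath"
)

// SyncRule is a hypothetical sketch of one entry in the proposed
// list-based sync config (a list of rules replacing the old map).
type SyncRule struct {
	Src  string // glob pattern, relative to the build context
	Dest string // destination directory inside the container
}

// Destinations returns the destination paths for a changed file by
// consulting all rules, as the commit message describes.
func Destinations(rules []SyncRule, changed string) []string {
	var dests []string
	for _, r := range rules {
		if ok, _ := filepath.Match(r.Src, changed); ok {
			dests = append(dests, filepath.Join(r.Dest, filepath.Base(changed)))
		}
	}
	return dests
}

func main() {
	rules := []SyncRule{
		{Src: "*.html", Dest: "/var/www"},
		{Src: "*.js", Dest: "/var/www/js"},
	}
	fmt.Println(Destinations(rules, "index.html")) // [/var/www/index.html]
}
```

Because every rule is consulted, one changed file can map to several destinations, which the old one-to-one map could not express cleanly.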
This looks amazing.
I have two open questions which we need to address before taking this further:
- How will the Syncer use the newly proposed DependenciesForArtifact source/destination map?
- How will the Syncer use the SyncMap map?
What if DependenciesForArtifact contains sources which need to be rebuilt?
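One hypothetical way the Syncer could combine the two sources of information is to treat any dependency that is absent from the SyncMap as requiring a rebuild. All names and signatures below are assumptions for illustration, not the proposal's actual interfaces:

```go
package main

import "fmt"

// DependencyLister sketches the existing per-artifact dependency source.
type DependencyLister interface {
	// DependenciesForArtifact lists local files the build depends on.
	DependenciesForArtifact() ([]string, error)
}

// SyncMapper sketches the newly proposed capability.
type SyncMapper interface {
	// SyncMap maps each syncable local file to its in-container paths.
	// Files that require a rebuild (e.g. compiled sources) are absent.
	SyncMap() (map[string][]string, error)
}

// needsRebuild illustrates one possible answer to the question above:
// a changed dependency with no SyncMap entry falls back to a rebuild.
func needsRebuild(dep string, syncMap map[string][]string) bool {
	_, ok := syncMap[dep]
	return !ok
}

func main() {
	sm := map[string][]string{"static/index.html": {"/srv/index.html"}}
	fmt.Println(needsRebuild("main.go", sm))           // true: rebuild
	fmt.Println(needsRebuild("static/index.html", sm)) // false: sync
}
```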
This may even have to be implemented upstream.
Until we can support those builders, we need to handle the case when a user tries to use inference with those builders.
- Option 1: bail out with an error
Bail out with a "not yet supported" error.
It would be great if we could bail out at the start by checking whether "SyncMap" is implemented, and throw an error for builders that don't implement it.
Something like that.
If we want to throw an error, we must do it upfront during the Skaffold pipeline config validation. Otherwise, we cannot differentiate between dependencies that are not copied into the container (e.g. Dockerfile) and builders that cannot provide destination paths.
But I also prefer Option 1, because it does not surprise users.
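A minimal sketch of such an upfront check during config validation, assuming a hypothetical per-builder capability lookup (the builder names and function names here are invented for illustration):

```go
package main

import (
	"errors"
	"fmt"
)

// supportsSyncMap is a hypothetical capability check; a real
// implementation would inspect the configured builder type.
func supportsSyncMap(builder string) bool {
	switch builder {
	case "docker", "kaniko":
		return true
	default:
		return false
	}
}

// validateSync fails fast during pipeline config validation, as
// suggested above, instead of surfacing the problem at inference time.
func validateSync(builder string, inferredSync bool) error {
	if inferredSync && !supportsSyncMap(builder) {
		return errors.New("sync inference is not yet supported for builder " + builder)
	}
	return nil
}

func main() {
	fmt.Println(validateSync("bazel", true))
}
```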
This situation seems similar to the case where the destination cannot be inferred by the builder:
FROM scratch
ADD foo baz
RUN mv baz bar
FROM scratch
COPY --from=0 bar bar
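To see why the Dockerfile above defeats static inference, consider a naive parser that only reads ADD/COPY instructions (a deliberately simplified sketch, not Skaffold's actual inference logic): the RUN mv is opaque to it, so it reports a stale destination.

```go
package main

import (
	"fmt"
	"strings"
)

// inferDests naively maps sources of two-argument ADD/COPY instructions
// to their destinations. RUN instructions are ignored, so any file moved
// by RUN mv keeps its stale, pre-move destination in the result.
func inferDests(dockerfile string) map[string]string {
	dests := map[string]string{}
	for _, line := range strings.Split(dockerfile, "\n") {
		fields := strings.Fields(line)
		if len(fields) == 3 && (fields[0] == "ADD" || fields[0] == "COPY") {
			dests[fields[1]] = fields[2]
		}
	}
	return dests
}

func main() {
	df := "FROM scratch\nADD foo baz\nRUN mv baz bar"
	fmt.Println(inferDests(df)) // map[foo:baz] — but the image only contains bar
}
```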
+1 on error - preferably at schema validation time to fail fast but could be just inference time too
CLAs look good, thanks!
@tejal29 I'm done for now. At your convenience, please have a second look.
@corneliusweig This looks good. The next Skaffold bi-weekly meeting is on 3rd April, 9:30 am to 10:00 am. Please join https://groups.google.com/forum/#!forum/skaffold-users to get the meeting invite.
Great! This is ready to merge once the Kokoro build goes green.
Thanks again to everyone who shaped this design proposal! In particular @tejal29, who initiated the whole design proposal process. Doing these revisions on the PR has helped a lot!
@tejal29 Do you think this is ready to be merged?
@dgageot Can you please have a look? It would be great to have your input before we jump into implementing this.
Let's merge this in. @dgageot can comment on the actual PR later, and we'll adjust accordingly.
Let me throw in an additional wrinkle that may be worth thinking about. I glossed over some details previously when I described how Jib copies from … For Skaffold to handle this properly, Skaffold would need to know if a builder requires a rebuild prior to performing a sync. It would require separating …
@briandealwis Just to clarify, I think "build" is used in two different contexts:
So I guess,
could be rephrased to
Correct? That kind of change probably needs to be picked up by the JIB team. However, the whole thing will only work if the container allows some hot-swapping or hot-reloading of updated files. Otherwise, the main process in the container needs to be restarted, which kind of defeats the whole point of sync. Does JIB support a hot-swapping mode?
You're right @corneliusweig; I just used "rebuild" as a shorthand for Jib-based projects, as a container build performs a recompile. It might be advantageous for the builder interface to support compile and build (to build the container image), but it seems easy enough to just use build since generating a container image is cheap. It's not a real stretch to imagine Skaffold being able to sync files from a built container image: it's just a matter of trawling through the layers. The hot-swapping/reloading is something that needs to be supported by the language runtime. The JVM does support it via helper libraries like Spring Boot Dev Tools.
* Improved pipeline config for artifact.sync

  Currently, the pipeline config is a map of local glob pattern to destination directory. This scheme has been extended with some magic sequences, making it difficult to understand. This commit converts the sync map into a list of sync rules. All sync rules are consulted to determine the destination paths. This prepares for further changes to the sync logic. Also see #1844

* Update sync spec config according to design proposal
  - Wrap sync-rules under key 'sync.manual'
  - Config changes: from -> src, to -> dest
  - flatten option removed; warn during schema upgrade if an incompatible pattern is migrated to the new schema version
* Add validation for sync rules
* Migrate sync config to new schema version
* Update filesync documentation: extract example into sample snippet in order to have it tested
* Run validation on doc examples
* Review comments: add test case and clarify example in docs
* Improve doc and error message wording

Signed-off-by: Cornelius Weig <[email protected]>
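The schema upgrade described in the merged commit could look roughly like this; the types, the "**" check standing in for an incompatible pattern, and the warning text are all assumptions for illustration:

```go
package main

import (
	"fmt"
	"strings"
)

// NewRule mirrors the renamed config fields (from -> src, to -> dest).
type NewRule struct {
	Src  string
	Dest string
}

// upgrade migrates the old map-based sync config to the new list form,
// warning about patterns the removed 'flatten' option used to affect.
// Treating "**" as the incompatible case is an assumption of this sketch.
func upgrade(old map[string]string) ([]NewRule, []string) {
	var rules []NewRule
	var warnings []string
	for from, to := range old {
		rules = append(rules, NewRule{Src: from, Dest: to})
		if strings.Contains(from, "**") {
			warnings = append(warnings, "pattern "+from+" may behave differently without flatten")
		}
	}
	return rules, warnings
}

func main() {
	rules, warns := upgrade(map[string]string{"src/**/*.js": "/app/js"})
	fmt.Println(rules)
	fmt.Println(warns)
}
```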
@loosebazooka (I hope this is the correct github handle :) We talked yesterday about the magic/smart sync mode. |
@corneliusweig, yeah this is what I was looking for.
As discussed in #1812