pluggable packages #31
Conversation
#### Use OCI images as applications

Projects such as Sealer and Kapp aim to use OCI images as the artifact to define and deploy multiple Kubernetes resources. This has a few advantages.
kapp doesn't require OCI images. kapp + imgpkg allows using OCI images.
This document proposes the following:
- Make ArgoCD a hard requirement.
- Define packages as Argo CD Applications (Helm, Kustomize, and raw manifests).
- Imperative pipelines for configuring packages are handled with ArgoCD resource hooks.
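To make the hook mechanism referenced above concrete, here is a minimal sketch of a Kubernetes Job registered as an Argo CD PostSync resource hook. The hook annotations are standard Argo CD conventions; the Job name, image, and command are placeholders for illustration only.

```yaml
# Illustrative sketch: a Job that Argo CD runs after a successful sync.
apiVersion: batch/v1
kind: Job
metadata:
  name: configure-package          # hypothetical name
  annotations:
    argocd.argoproj.io/hook: PostSync
    argocd.argoproj.io/hook-delete-policy: HookSucceeded
spec:
  template:
    spec:
      restartPolicy: Never
      containers:
        - name: configure
          image: alpine:3
          command: ["sh", "-c", "echo 'post-install configuration goes here'"]
```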
Is using Argo CD hooks to execute Kubernetes Jobs pre/post sync of resources really the best approach, versus using a real supply chain/pipeline tool such as Tekton or Argo Workflows? A drawback of hooks is that they are executed on every resource sync and don't offer a kind of singleton hook which is only executed when a new application is installed.
I would also like to mention that we should use the Argo CD App of Apps pattern as much as we can:
- https://medium.com/dzerolabs/turbocharge-argocd-with-app-of-apps-pattern-and-kustomized-helm-ea4993190e7c

Using ApplicationSets and their generators will also help us customize the generated Application CRs, for example with the version of the pluggable package to be installed, its Helm chart, etc.
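As a sketch of how generators could parameterize packages, an ApplicationSet with a list generator might look like the following. The package name, chart version, and repo URL are hypothetical placeholders, not the project's actual values.

```yaml
# Illustrative sketch: templating per-package Applications from a list generator.
apiVersion: argoproj.io/v1alpha1
kind: ApplicationSet
metadata:
  name: pluggable-packages         # hypothetical name
spec:
  generators:
    - list:
        elements:
          - package: backstage
            chartVersion: 1.0.0    # placeholder version
  template:
    metadata:
      name: '{{package}}'
    spec:
      project: default
      source:
        repoURL: https://example.com/charts   # placeholder chart repo
        chart: '{{package}}'
        targetRevision: '{{chartVersion}}'
      destination:
        server: https://kubernetes.default.svc
        namespace: '{{package}}'
      syncPolicy:
        automated: {}
```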
We can make Argo CD hooks run Argo Workflows.
Ideally, actions taken as part of executing Argo Workflows will be no-ops in subsequent executions. But if that happens not to be the case, then maybe we consider triggering and running Argo Workflows separately. Either way, I agree that Argo Workflows are better suited for doing imperative work.
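One way hooks and Workflows can be combined, sketched below under the assumption that Argo Workflows is installed in the cluster: a Workflow resource itself carries the hook annotation, so Argo CD submits it as part of the sync. Names, image, and the command are illustrative only.

```yaml
# Illustrative sketch: an Argo Workflow used as an Argo CD PostSync hook,
# so imperative steps run as a Workflow rather than a bare Job.
apiVersion: argoproj.io/v1alpha1
kind: Workflow
metadata:
  generateName: package-setup-     # hypothetical name prefix
  annotations:
    argocd.argoproj.io/hook: PostSync
spec:
  entrypoint: main
  templates:
    - name: main
      container:
        image: alpine:3
        # Steps should be written to be idempotent, since hooks can
        # run again on subsequent syncs.
        command: ["sh", "-c", "echo 'idempotent setup step'"]
```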
Yep Application Sets are a great idea to use with the in-cluster git server, especially for reference implementation stuff.
"we can make Argo CD hooks run Argo workflows."
There are some limitations using hooks = kubernetes jobs as ArgoCD delete them end like the pods of their execution. So, if users would like to view the logs, then this is impossible. If the job fails, then again it is also very difficult to access/vies the log of the newly pod created from a job as argocd kill/recreate them -every x seconds
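Worth noting: the deletion behavior described here is controlled by Argo CD's hook-delete-policy annotation. A sketch of a policy that keeps failed hooks around for debugging:

```yaml
metadata:
  annotations:
    argocd.argoproj.io/hook: PostSync
    # Delete the hook Job only when it succeeds, so a failed Job and
    # its pod logs remain available for inspection.
    argocd.argoproj.io/hook-delete-policy: HookSucceeded
```

This mitigates the log-visibility problem for failures, though successful hook logs are still cleaned up.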
```yaml
metadata:
  name: pi
postCreation:
  ...
```
If we include jobs, how will such a reconciliation work? Installing a package will of course require letting Argo CD install the resources within the target cluster, but also that what we call jobs succeeded.
1. Complexity.

This introduces a completely new mechanism to manage applications. The current idpbuilder design is very simple, with no concept of pipelining. It allows end users to define manifests at compile time and apply them as Argo CD applications. Introducing this feature and maintaining it may involve a significant time commitment.
Having such a mechanism is definitely a plus, as today creating a Kubernetes platform is really a pain :-(
Even VMware Tanzu TAP doesn't propose such a great solution on top of their kapp + ytt + catalog solution.
Remarks @nabuskey
docs/pluggable-packages.md
### Runtime Git server content generation

As mentioned earlier, Git server contents are generated at compile time and cannot be changed at run time.
To solve this, Git content should be created at run time by introducing a new flag to idpbuilder. This flag takes a directory and builds an image with the content from the directory. If this flag is not specified, use the embedded FS to provide the "default experience".
Some added support for this: our internal CI is a Go CLI which depends on idpbuilder and implements exactly this :).
Should we consider requiring signed commits for our reference "packages" (as Argo CD apps), so that idpbuilder can be built to trust only official reference packages? https://argo-cd.readthedocs.io/en/release-1.8/user-guide/gpg-verification/ We can also allow users to configure their own GPG keys to use for verification as a flag to
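For reference, signature enforcement in Argo CD is configured per project. A sketch of an AppProject that only syncs commits signed by listed GPG keys; the project name and key ID below are placeholders:

```yaml
# Illustrative sketch: enforce GPG-signed commits for a project.
apiVersion: argoproj.io/v1alpha1
kind: AppProject
metadata:
  name: reference-packages         # hypothetical project name
  namespace: argocd
spec:
  sourceRepos:
    - '*'
  destinations:
    - server: '*'
      namespace: '*'
  signatureKeys:
    - keyID: 4AEE18F83AFDEB23      # placeholder key ID
```

The keys themselves would be imported into Argo CD (e.g. via its GPG key management) before syncs against this project succeed.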
Did not know about this. We should 100% do this for our reference packages. My current thinking around git server stuff:
Ok I've incorporated ideas posted above. TLDR:
About Git server contents: is mirroring content from remotes to the in-cluster git server the right way to go? This becomes a bit more complicated once you start dealing with nested remotes, especially for Helm. We pretty much need to:
This breaks signing on charts as well, but I'm not sure if it matters for local use. If we are to render them, things are a bit simpler.
This means we have to make sure we are using the same process to render manifests as Argo CD does. So more libraries, more dependencies, and potentially dealing with rendering logic. We are now pairing a specific version of Argo CD to a release: we can't guarantee manifests render the same if a different version of Argo CD is deployed to the cluster than the one in use by idpbuilder. This also pretty much won't allow you to take advantage of Flux CD source mechanisms IF we decide to support Flux in the future. Also note that to support Argo CD app of apps, we need to render locally, because an app of apps could be inside a Helm chart, a Kustomize overlay, or plain manifests. We could also pass local credentials to the cluster, but I am not sure that's a good practice. I can't be sure which credentials should be passed; passing all of them would not be a good idea imo.
I don't see any drawback to rendering and then pushing to the local gitserver. Am I missing something? Is it simply that we would have a dependency on Helm as a library in idpbuilder?
Yeah, pretty much. Dependencies on the Helm and Kustomize libraries. We'll probably have to get the rendering logic from Argo CD, so this will add more dependencies. Another concern is that Argo CD doesn't do the rendering with this approach, so we have to make sure our rendering logic matches Argo CD's. This could introduce a case where the library version of Argo CD doesn't match the version of Argo CD installed.
@nabuskey Would it make sense to only mirror contents which are referenced by a local path? In the case of an Argo app which already has a remote, it might be OK to just use that remote directly. The main win for the gitserver IMO is removing the dependency on having to push packages to some external mirror, but if the package is already pushed this shouldn't be an issue.
I agree with this. I think the only issue is how well we can keep the code paths between local and remote unified. I would hate to have to handle two separate "pipelines" of artifact herding when there is a design choice that could result in only one being necessary. However, based on @nabuskey's comment about needing to match the rendering to Argo's, the tradeoff complexity might be more of a burden. I will take a back seat on this decision as you guys know this plumbing better than me. My only concern is that we have a great local workflow for developers that doesn't require pushing to a remote just to test changes during quick iteration.
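For the local-path case being discussed, an Application sourced from the in-cluster git server could look roughly like this. The gitea service URL and repo path are hypothetical, assuming gitea is the in-cluster server:

```yaml
# Illustrative sketch: an Application whose source is the in-cluster git server.
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: backstage                  # hypothetical package name
  namespace: argocd
spec:
  project: default
  source:
    # Placeholder in-cluster gitea URL and repo path.
    repoURL: http://gitea.gitea.svc.cluster.local:3000/idpbuilder/packages.git
    targetRevision: HEAD
    path: backstage
  destination:
    server: https://kubernetes.default.svc
    namespace: backstage
  syncPolicy:
    automated: {}
```

Under the "mirror only local paths" idea, a package that already lives at a public remote would instead keep that remote as its `repoURL` unchanged.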
One minor nit comment, LGTM.
docs/pluggable-packages.md
## Proposal

This document proposes the following:
- Make ArgoCD a hard requirement.
I wonder if we could expand on this. I think we should make ArgoCD a hard requirement of our installers (at least for the time being), but we should also maintain separation between packages and CD tooling. E.g., I'd expect there to be separate backstage packages (Helm/Kustomize) from the Argo apps which install them. With this we leave the door open to supporting CD alternatives in the future. If this sounds reasonable, could we include a line like: "we will strive to maintain separation between packages and specific CD technologies (Argo) to allow for future CD pluggability"?
Sounds reasonable to me. For this implementation, we will hard require ArgoCD but there is no reason this cannot be changed in the future.
Signed-off-by: Manabu Mccloskey <[email protected]>
I don't think that Argo CD supports accessing a local directory, as resources are managed by the Argo CD repo server (shards - redis).
To extend @nabuskey's proposition to use a git server (= gitea) internally, I would like to mention these points too:
100% agreed. Being able to visualize and interact with git is valuable.
This was my intention as well. We will need to make ArgoCD manage itself but should be easy. I will clarify that in the doc.
I think these make sense as this project matures. I have a few concerns at this stage where we want to focus on use cases in local settings.
These are great discussion points to follow up on after this PR. I think we should keep it simple and nail the local experience first. I think it's low-hanging fruit whose value we can see immediately.
closes: #5