Design pluggable "distro" packaging mechanism for building idp stack flavors #5
@greghaynes @nabuskey @nimakaviani This issue will probably be of interest to you.
I agree that making it configurable via a config file would be a great addition. That said, I think we will need to use / support Argo CD resource hooks, because there will be use cases which require some form of imperative logic. In addition, in cases where Argo CD hooks are not enough, like this issue, we could:
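For context, the resource-hook mechanism mentioned above lets a Job run imperative steps around an app sync. A minimal sketch, with hypothetical names and a placeholder image:

```yaml
# Hypothetical example: a Job Argo CD runs before syncing the app's resources.
apiVersion: batch/v1
kind: Job
metadata:
  name: pre-install-setup            # hypothetical name
  annotations:
    argocd.argoproj.io/hook: PreSync # run before the sync phase
    argocd.argoproj.io/hook-delete-policy: HookSucceeded
spec:
  template:
    spec:
      restartPolicy: Never
      containers:
        - name: setup
          image: bitnami/kubectl:latest  # placeholder image
          command: ["sh", "-c", "echo 'imperative setup steps go here'"]
```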
As for application images: do we expect end users to build their own IDP package as an image instead of Argo apps?
So a first question that we should address here is the format to be used to ship the resources: OCI bundle image vs. Helm tar.gz file vs. Helm chart vs. plain resources (see ArgoCD directory support). If there is mutual agreement to adopt the OCI bundle format (which is also an option with Helm), then proposing tools for the CNOE Packages/Components is the natural second step. Remark: should we define what we deploy on the IDPBuilder as …
I think we should focus on supporting plain YAML based approaches like Helm and Kustomize via GitOps. We could use an OCI image to bundle these files up, but that's just a way to deliver the files. Practically though, whether we use OCI bundles or not, it adds extra dependencies when used within CI jobs. You need a way to pull up-to-date manifests from somewhere like repos or registries, to ensure that whatever you are building works with the IDP that's deployed, e.g. Backstage template validation.
I fully agree. VMware Tanzu uses such OCI bundles to package their YAML + YTT files to be deployed on the platform. Before installing the platform you need to download 500 MB to 1 GB of images, and when issues occur you have to extract the contents of the OCI images to figure out why issue x.y.z happens with package a.b.c, which relies on the installation of the YAML resources processed by YTT.
Personally, I have preferred Helm charts over raw YAML resources or Kustomize. You can store Helm charts in an OCI-compliant registry (in an AWS implementation this could be ECR), or store a tgz in Git if desired. The ability for values to be dynamically pulled from ApplicationSets/generators makes Helm charts much more powerful when you start dealing with things like multi-cluster environments, ephemeral environments, etc.
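The AppSets/generators idea described above could look roughly like this: a cluster generator stamps out one Application per registered cluster and feeds per-cluster values into an OCI-hosted chart. All names, the registry URL, and the chart are placeholders:

```yaml
# Hypothetical sketch: ApplicationSet whose cluster generator supplies
# per-cluster Helm values for a chart stored in an OCI registry (e.g. ECR).
apiVersion: argoproj.io/v1alpha1
kind: ApplicationSet
metadata:
  name: backstage                      # hypothetical name
spec:
  generators:
    - clusters: {}                     # one Application per registered cluster
  template:
    metadata:
      name: 'backstage-{{name}}'
    spec:
      project: default
      source:
        repoURL: 123456789012.dkr.ecr.us-east-1.amazonaws.com  # placeholder registry
        chart: backstage               # placeholder chart name
        targetRevision: 1.0.0
        helm:
          parameters:
            - name: clusterName
              value: '{{name}}'        # value pulled dynamically from the generator
      destination:
        server: '{{server}}'
        namespace: backstage
```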
The term component makes more sense to me. These are components being deployed into a K8s cluster. You could think of components like how Backstage defines them in their ecosystem model: https://backstage.io/docs/features/software-catalog/system-model#ecosystem-modeling
Some more thoughts. Suppose we have a directory of Argo CD apps from a real cluster. When we point idpbuilder at it, it should update relevant fields like repo URL and path, then apply it to the local cluster. Another thought is on imperative stuff. Since we are hard-requiring Argo CD, we could use resource hooks like Jesse mentioned, plus Helm hooks. Is this enough? Do we want to include a spec for running imperative stuff?
A hook is a Kubernetes Job which is executed before/after/during synchronisation: https://argo-cd.readthedocs.io/en/stable/user-guide/resource_hooks/#overview. Take care: if a sync is executed, then hooks will be re-executed. Is that something we want for components that we intend to install one time, as singletons?
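To make the re-execution concern concrete: the delete-policy annotation only controls cleanup, not whether a hook runs again. A hedged sketch of the relevant annotations:

```yaml
# The hook below is deleted once it succeeds, but Argo CD will recreate
# and re-run it on every subsequent sync; there is no built-in
# "run exactly once" semantic, so one-time setup logic needs to be idempotent.
metadata:
  annotations:
    argocd.argoproj.io/hook: PostSync
    argocd.argoproj.io/hook-delete-policy: HookSucceeded
```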
I agree. From this point of view, we can take inspiration from what Tanzu TAP does to set up their IDP. They no longer offer a local tool able to create a kind cluster; their requirement is simply to have:
Next, you need to install some tools if you plan to relocate the images to a local registry (imgpkg), and likewise to re-generate resources (ytt). This step is optional. See: https://github.com/halkyonio/tap/blob/main/scripts/tap.sh#L346-L349. When done, you only have to install 2 controllers on top of your cluster (= the prerequisites), able to manage the:
Remark: kapp is similar to Argo CD around some concepts, like handling a group of YAML resources as an Application. Next, you need to register the catalog which contains the components composing their IDP (= Tanzu Application Platform) and their default values to be installed: https://github.com/halkyonio/tap/blob/main/scripts/tap.sh#L408. To customize the installation of your TAP, you create a config file containing the values that you would like to override for each component: https://github.com/halkyonio/tap/blob/main/scripts/tap.sh#L413. That's all; now you can install the components using 2 commands: https://github.com/halkyonio/tap/blob/main/scripts/tap.sh#L518-L530. Some of the ideas described here could become part of the IDP builder too ;-)
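For readers unfamiliar with the Carvel flow described above, it boils down to two CRs: a PackageRepository that registers the catalog, and a PackageInstall per component with overridden values. A minimal sketch; all names, versions, and URLs are placeholders:

```yaml
# Hypothetical sketch of the Carvel packaging flow TAP relies on.
apiVersion: packaging.carvel.dev/v1alpha1
kind: PackageRepository
metadata:
  name: idp-repo                      # placeholder: registers the catalog
spec:
  fetch:
    imgpkgBundle:
      image: registry.example.com/idp-packages:1.2.0   # placeholder bundle
---
apiVersion: packaging.carvel.dev/v1alpha1
kind: PackageInstall
metadata:
  name: backstage                     # placeholder: installs one component
spec:
  serviceAccountName: install-sa      # placeholder service account
  packageRef:
    refName: backstage.example.com    # placeholder package name
    versionSelection:
      constraints: 1.0.0
  values:
    - secretRef:
        name: backstage-values        # the config file with overridden values
```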
OK, I spent some time reading through the conversations across multiple threads. I am tempted to suggest that we make a hard dependency on having Argo CD and Argo Workflows installed out of band (how we do it now), and from there on assume that all the imperative and declarative installations and accompanying tasks are handled via a combination of Argo Applications and hooks on Argo Workflows. This also ties back to #14, where I think the idea of standardizing on Argo apps or app-of-apps makes a lot of sense. Also, reflecting on how TAP does it, and given the portfolio of current users of CNOE, I think Vault needs to be a relatively immediate addition to the list of tools / controllers we deploy. I am thinking we add it in the next iteration. Thoughts?
+1, and we should also install cert-manager to generate self-signed certificates + TLS key files.
Do you want to use Argo Workflows to perform tasks such as configuring a component post-installation of the resources deployed by Argo CD (e.g. Application CRs)? Have you considered using Tekton as the pipeline/task engine instead?
+1. Also, I wonder if we should more clearly define the relationship between idpbuilder and the "reference applications" (or whatever name we give to the sets of Argo apps + K8s packages). Specifically, that idpbuilder is an optional tool to support local development and CI use cases, and therefore is not a dependency of CNOE reference implementations.
Yep, this is how we plan to use it for our control plane CI as well, and I added a flag to disable embedded Argo app installation for this reason (https://github.com/cnoe-io/idpbuilder/blob/main/api/v1alpha1/localbuild_types.go#L20). E.g. in our CI we have a set of kustomize packages for installing Crossplane which we'd like to keep using for the time being. Also, I imagine that if we adopted a CNOE reference implementation for this, it would still make more sense to install a kustomize package in this way, which in turn references the CNOE reference implementation as a base.
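The "kustomize package referencing the reference implementation as a base" idea might look something like the following sketch; the base URL and patch file are hypothetical:

```yaml
# Hypothetical sketch: a local kustomize package layering CI-specific
# overrides on top of a remote CNOE reference implementation base.
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  # placeholder remote base; a real package would point at an actual repo path
  - https://github.com/example-org/reference-impl//manifests?ref=main
patches:
  - path: crossplane-ci-overrides.yaml   # hypothetical local patch file
```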
Yes, to Greg's point: the goal for the first iteration is to ensure that we get the reference implementation ported over to idpbuilder. Deploying cert-manager and Vault are prerequisites of this porting.
We certainly have considered Tekton, and you can find some initial references here. The main flavor of the CNOE IDP will aim to target the core technologies we have implemented in the reference implementation. From there, we will extend to other technologies like Tekton, Flux, etc.
We need to be able to allow for our different teams to build out their reference IDP stacks while not requiring all of them to be "hard coded" into the builder's "Embedded Apps".
See idpbuilder/pkg/apps/resources.go, lines 20 to 32 at commit 56089e4.
There are a number of ways this can be done. We should document a design proposal and alternatives / prior art, open it up for discussion in a PR, and then ratify it with other interested orgs. We may or may not prototype a straw-man solution during the discussion, but we should not merge to main until the design is agreed upon.
Some food for thought:
We should target the aws reference implementation install scripts as the first "flavor" of the idp distro since it is the most feature complete. See https://github.com/cnoe-io/reference-implementation-aws-user-friendly/blob/main/setups/backstage/install.sh
We can possibly make use of argo apps resource hooks to do imperative commands before the argo apps reconcile. See https://argo-cd.readthedocs.io/en/stable/user-guide/resource_hooks/
There are some issues with how the lifecycle hooks execute and when they are blocking vs. not; see the discussion here: argoproj/argo-cd#9891
We want the idpbuilder to be a single binary with no dependencies other than Docker.
We can possibly make use of Carvel imgpkg to bundle all of the apps and their resource definitions into a single image that can be used offline without a registry: https://carvel.dev/imgpkg/docs/v0.24.0/basic-workflow/#step-4-use-pulled-bundle-contents
Sealer is another project with similar single-binary-style packaging of Kubernetes and associated apps running on top of it. However, it does seem to require the use of a registry. There may be some interesting prior art there, especially in how it wraps k0s.
See: https://github.com/sealerio/sealer
See: https://www.alibabacloud.com/blog/sealer-a-kubernetes-based-distributed-application-package-%26-delivery-tool_599065
A research paper on kubernetes packaging systems: https://oparu.uni-ulm.de/xmlui/bitstream/handle/123456789/38625/promis20_01_baur.pdf?sequence=1
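To make the imgpkg option above more concrete: an imgpkg bundle is just a directory containing the app's config plus an ImagesLock file that pins every referenced image by digest, which is what enables the offline/air-gapped workflow. A hedged sketch with placeholder names and digests:

```yaml
# Hypothetical imgpkg bundle layout for offline distribution of the apps:
#   my-bundle/
#   ├── .imgpkg/
#   │   └── images.yml    <- ImagesLock: pins referenced images by digest
#   └── config/
#       └── apps.yaml     <- the Argo apps / resource definitions to deploy
#
# .imgpkg/images.yml (digest is a placeholder):
apiVersion: imgpkg.carvel.dev/v1alpha1
kind: ImagesLock
images:
  - image: index.docker.io/library/nginx@sha256:0000000000000000000000000000000000000000000000000000000000000000
    annotations:
      kbld.carvel.dev/id: nginx:latest   # original tag the digest was resolved from
```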