(proposal) Transitioning from Mixins to Custom Remote Invocation Images #3239

Open
1 of 2 tasks
jmcudd opened this issue Oct 17, 2024 · 6 comments

What design is being proposed?

This proposal suggests transitioning Porter's architecture from mixins to custom invocation images that can be referenced remotely. The primary motivation is to eliminate the complexity of creating a custom mixin and shift toward custom invocation images instead. This encourages community-driven contributions, supports a broader range of languages and technologies, and aligns with Docker's widely adopted standards.

Additionally, invocation images will automatically mount the porter.yaml file, reducing or eliminating the need for manual "wiring" of porter.yaml metadata, streamlining the process for developers.
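To illustrate what "no manual wiring" could look like, here is a minimal sketch of a helper inside an invocation image reading the mounted manifest directly. The mount path (/cnab/app/porter.yaml, following the CNAB bundle layout) and the helper name are our assumptions, not part of the proposal:

```shell
#!/bin/sh
# Hypothetical helper inside an invocation image: read a field straight from
# the mounted porter.yaml instead of having a mixin wire it through.
read_bundle_name() {
  manifest="$1"
  # Naive extraction of the top-level "name:" field; a real image would
  # likely ship a proper YAML parser such as yq instead.
  grep '^name:' "$manifest" | head -n 1 | sed 's/^name:[[:space:]]*//'
}

# Simulate the manifest Porter would mount at /cnab/app/porter.yaml:
printf 'name: my-porter-bundle\nversion: 1.0.0\n' > /tmp/porter.yaml
read_bundle_name /tmp/porter.yaml   # prints: my-porter-bundle
```

Any tool baked into the image could consume bundle metadata this way, with no per-tool mixin glue code.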

Publicly available invocation images, much like mixins are today via the mixin CLI, would be indexed and searchable, creating a vibrant ecosystem of pre-built images for various DevOps stacks such as Kubernetes, Helm, and Terraform.

Additional Context

The proposed transition will chiefly impact areas such as the CLI, porter.yaml manifests, and the way bundles are defined and managed. This change allows for more flexible use of remote invocation images within the porter.yaml, enabling reuse and promoting community-created standard images.

The existing custom Dockerfile functionality can serve as a precursor to new custom invocation images. Developers can refine custom Dockerfiles into invocation images, which can then be shared through community platforms or repositories, enhancing collaboration and tool accessibility.
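As a sketch of that path, a custom Dockerfile could be promoted into a shareable invocation image roughly like this. The base image, tag, script name, and the use of the CNAB /cnab/app/run convention here are illustrative assumptions, not a finalized layout:

```dockerfile
# Illustrative only: package a Terraform toolchain as a reusable invocation image.
FROM hashicorp/terraform:1.9
# CNAB invocation images conventionally execute /cnab/app/run for each action.
COPY run.sh /cnab/app/run
RUN chmod +x /cnab/app/run
ENTRYPOINT ["/cnab/app/run"]
```

Once built and pushed (for example, `docker build -t ghcr.io/myorg/terraform-invocation-image:1.0.0 .` followed by `docker push ghcr.io/myorg/terraform-invocation-image:1.0.0`), the image could be referenced from any bundle's invocationImages list.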

Risks/Concerns

  • Transition Complexity: There may be challenges for users transitioning to this new model, necessitating well-documented migration strategies and extensive user support.
  • Security and Integrity: Introducing remote invocation images requires careful management to ensure the authenticity and integrity of images used. Strict validation, security layers, and possibly image signing will be critical defenses.
  • Development Workload: Increased demands on development and maintenance are expected to support the transition, requiring additional resources for systematic upgrades and tool creation.
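One possible mitigation for the integrity concern (our illustration, not part of the proposal) is to pin invocation images by digest rather than mutable tags; the digest value below is a placeholder:

```yaml
invocationImages:
  - name: terraform-tools
    # Pinning by digest means the resolved image can never change underneath
    # the bundle, unlike a ":latest" or ":stable" tag.
    image: ghcr.io/myorg/terraform-invocation-image@sha256:<digest>
```

Signing the published image (for example with sigstore's cosign) would add provenance on top of immutability.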

What other approaches were considered?

Alternative approaches included maintaining a hybrid model to offer legacy support for mixins. This would reduce disruption but involve maintaining dual models, potentially complicating the overall system architecture. Another approach was enhancing existing mixins to make them easier to use, but this was seen as an insufficient measure given the greater flexibility and community engagement custom invocation images offer.

Implementation Details

Upon acceptance, the proposal would make changes centered around the porter.yaml file, allowing direct specification of custom remote invocation images. This encourages the development and uptake of community-driven solution images. Below is an example configuration to illustrate this concept:

name: my-porter-bundle
version: 1.0.0
description: Example bundle managing infrastructure and Kubernetes via invocation images.
invocationImages:
  - name: terraform-tools
    image: ghcr.io/myorg/terraform-invocation-image:latest
  - name: helm-tools
    image: ghcr.io/myorg/helm-invocation-image:stable

install:
  - description: "Install Infrastructure resources"
    invoke:
      - name: terraform-tools
        action:
          run: terraform apply -auto-approve
  - description: "Deploy application to Kubernetes"
    invoke:
      - name: helm-tools
        action:
          run: helm install myapp ./myapp-chart

Additional Examples

Example 1: Kubernetes Deployment Workflow

name: k8s-deployment-bundle
version: 1.0.0
description: A bundle handling the deployment, upgrade, and removal of a Kubernetes application.

invocationImages:
  - name: kubernetes-tools
    image: ghcr.io/devteam/k8s-invocation-image:latest

install:
  - description: "Deploy application to Kubernetes cluster"
    invoke:
      - name: kubernetes-tools
        action:
          run: kubectl apply -f app-deployment.yaml

upgrade:
  - description: "Update Kubernetes deployment configuration"
    invoke:
      - name: kubernetes-tools
        action:
          run: kubectl set image deployment/app app=app:v2

uninstall:
  - description: "Remove application from Kubernetes cluster"
    invoke:
      - name: kubernetes-tools
        action:
          run: kubectl delete -f app-deployment.yaml

Example 2: Infrastructure Management with Terraform

name: infra-management-bundle
version: 1.1.0
description: A bundle that manages infrastructure using Terraform for provisioning, updating, and teardown.

invocationImages:
  - name: terraform-tools
    image: ghcr.io/infra/terraform-invocation-image:latest

install:
  - description: "Provision infrastructure"
    invoke:
      - name: terraform-tools
        action:
          run: terraform init && terraform apply -auto-approve

upgrade:
  - description: "Apply Terraform updates to infrastructure"
    invoke:
      - name: terraform-tools
        action:
          run: terraform apply -auto-approve

uninstall:
  - description: "Destroy infrastructure"
    invoke:
      - name: terraform-tools
        action:
          run: terraform destroy -auto-approve

Example 3: CI/CD Pipeline with Docker

name: cicd-bundle
version: 2.0.0
description: A CI/CD bundle that sets up, updates, and cleans up a Docker-based application.

invocationImages:
  - name: docker-tools
    image: ghcr.io/container/docker-invocation-image:stable

install:
  - description: "Build and run Docker container for initial deployment"
    invoke:
      - name: docker-tools
        action:
          run: |
            docker run -d -p 80:80 --name myapp-container ${bundle.images.myapp.repository}@${bundle.images.myapp.digest}

upgrade:
  - description: "Upgrade the Docker container to a new version"
    invoke:
      - name: docker-tools
        action:
          run: |
            docker stop myapp-container
            docker rm myapp-container
            docker run -d -p 80:80 --name myapp-container ${bundle.images.myapp.repository}@${bundle.images.myapp.digest}

uninstall:
  - description: "Stop and remove the Docker container"
    invoke:
      - name: docker-tools
        action:
          run: |
            docker stop myapp-container
            docker rm myapp-container

Checklist

  • An announcement of this proposal has been sent to the Porter mailing list: https://porter.sh/mailing-list
  • This proposal has remained open for at least one week, allowing time for community feedback.
@jmcudd jmcudd changed the title Transitioning from Mixins to Custom Remote Invocation Images (proposal) Transitioning from Mixins to Custom Remote Invocation Images Oct 17, 2024
@kichristensen (Contributor)

Hi @jmcudd, this is an interesting proposal.
There exists a similar PEP 005, and the initial proposal for that PEP can be found here. The PEP was never completed, but I think it is worth taking some of the same considerations into account.

@kichristensen (Contributor)

We also have to consider how this would affect the Kubernetes Operator. Would it require running Docker inside the pod?

@schristoff (Member)

I think we could start a PoC as a feature (that way it can be disabled within operator for now) - but @kichristensen this is very similar to PEP 005 which is why I would be happy to see some traction on this

jmcudd commented Oct 28, 2024

Some of the ideas I have about this are borrowed from kpt and Tekton, which both leverage containerized functions heavily. Most CI/CD pipelines these days use containers for each step: GitHub Actions, Concourse, etc. all use containerization to encapsulate specific CI/CD tooling.

kichristensen commented Oct 28, 2024

> I think we could start a PoC as a feature (that way it can be disabled within operator for now) - but @kichristensen this is very similar to PEP 005 which is why I would be happy to see some traction on this

I agree a PoC would be great. The thing we need to understand in more detail is the impact of invocation images vs. bundles. Invocation images require Docker to run them (at least at the moment); bundles by themselves technically don't (depending on how it would be implemented). The impact might be different depending on the choice.

@lbergnehr (Contributor)

My few cents are that mixins and prebuilt images aim for much the same end result: a built Docker image tailored to the steps of the bundle. I think making it possible to run different images in each step/action would be great. However, I do think there are features of mixins that would not be possible by only using pre-built invocation images, such as the metaprogramming aspects a mixin has (generating the Dockerfile). The terraform mixin, for example, automatically initializes providers into the target invocation image.

My suggestion is to keep them both, somewhat like docker compose, which lets you specify an image or a Dockerfile that defines the image.
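A sketch of that hybrid, with keys that are hypothetical here, mirroring docker compose's image/build duality:

```yaml
invocationImages:
  - name: terraform-tools
    # Pre-built, community-published image, pulled as-is:
    image: ghcr.io/myorg/terraform-invocation-image:stable
  - name: generated-tools
    # Or keep mixin-style generation: a tool emits the Dockerfile, Porter builds it.
    build:
      dockerfile: ./Dockerfile.generated-tools
```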
