Workload patching #25
Hi @maxstepanov, just to make sure: what do you mean by unset the `spec.replicas` field? Also, what you want is generating/deploying a `HorizontalPodAutoscaler` resource alongside the workload?
Hi, @mathieu-benoit, when using HPA it is recommended to remove the `spec.replicas` field from the Deployment manifest so that the HPA owns the replica count. The default value for `spec.replicas` is 1 anyway, so omitting it is safe. Like so:

```yaml
- uri: template://default/hpa-deployment
  type: hpa-deployment
  manifests: |
    - apiVersion: autoscaling/v2
      kind: HorizontalPodAutoscaler
      metadata:
        name: {{ .SourceWorkload }}
      spec:
        scaleTargetRef:
          apiVersion: apps/v1
          kind: Deployment
          name: {{ .SourceWorkload }}
        minReplicas: {{ .Params.minReplicas }}
        maxReplicas: {{ .Params.maxReplicas }}
        metrics:
        - type: Resource
          resource:
            name: cpu
            target:
              type: Utilization
              averageUtilization: 80
  patches:
  - target:
      kind: Deployment
      name: "{{ .SourceWorkload }}"
    patch: |-
      - op: remove
        path: /spec/replicas
```

PS: Not sure if Workload …
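For context, a Score workload would consume a provisioner like this with a resource entry along the following lines. This is a minimal sketch: the resource name `hpa` and the `minReplicas`/`maxReplicas` params are illustrative assumptions, not part of any shipped provisioner.

```yaml
apiVersion: score.dev/v1b1
metadata:
  name: my-service
containers:
  main:
    image: my-image:latest
resources:
  # Requesting this resource would trigger the hpa-deployment provisioner,
  # which emits the HPA manifest and patches away spec.replicas.
  hpa:
    type: hpa-deployment
    params:
      minReplicas: 2
      maxReplicas: 10
```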
Makes sense, thanks for the info.
I totally agree with that: it would be very beneficial to be able to inject or patch the generated workload manifests from a provisioner.
That's a good point: https://github.com/score-spec/score-k8s/blob/main/internal/convert/workloads.go#L212, @astromechza, should we not set/generate this field/value by default?
Fix PR for replicas=1 is up!
I totally understand the need for modifying the converted output of score-k8s, and we have a few places where I've seen this come up. I'm not really clear why a "resource" is the answer here though. It might need to be some other syntax and semantics that allows users to modify the workload output - similar to how we might allow Score workloads to be converted to Jobs/CronJobs or custom CRDs.
@astromechza There are many cases that require Workload patching alongside the added manifests. I have a case where I need to create a Secret and mount it as a file. I can run a template provisioner that adds the Secret to the output manifests, but I don't see any way to patch the volume into the workload from the provisioner definition. If I can specify JSON patches and templates together, then the provisioner implementation is self-contained and easy to understand. That's why I'd like patches to live in the provisioner. This approach would require adding a `patches` field to the provisioner definition. Here is one example; the same thing applies to ServiceAccounts, PDBs etc...

```yaml
- uri: template://default/mounted-secret
  type: mounted-secret
  manifests: |
    {{ if not (eq .WorkloadKind "Deployment") }}{{ fail "Kind not supported" }}{{ end }}
    - apiVersion: v1
      kind: Secret
      metadata:
        name: {{ .State.service }}
        annotations:
          k8s.score.dev/source-workload: {{ .SourceWorkload }}
          k8s.score.dev/resource-uid: {{ .Uid }}
          k8s.score.dev/resource-guid: {{ .Guid }}
        labels:
          app.kubernetes.io/managed-by: score-k8s
          app.kubernetes.io/name: {{ .State.service }}
          app.kubernetes.io/instance: {{ .State.service }}
      data:
        password: {{ .State.password | b64enc }}
  patches: |
    - target:
        kind: Deployment
        name: "{{ .SourceWorkload }}"
      patch: |-
        - op: add
          path: /spec/template/spec/volumes
          value:
          - name: "{{ .SourceWorkload }}"
            secret:
              secretName: "{{ .SourceWorkload }}"
```

The second route is not to use patches at all. I can work on this if there is interest.
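For illustration, assuming a workload named `my-service` and `.State.service` also resolving to `my-service`, the patch above would leave the rendered Deployment with a volumes section roughly like this (a hand-written sketch, not actual score-k8s output):

```yaml
# Fragment of the patched Deployment (hypothetical rendering).
spec:
  template:
    spec:
      volumes:
      - name: my-service
        secret:
          secretName: my-service
```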
Ok cool, this is the use case. We do already have a solution to this particular thing, although it's not documented particularly well: https://github.com/score-spec/score-k8s/blob/26e22116f2a7562d669256bc1d64e9013a6be346/internal/provisioners/default/zz-default.provisioners.yaml#L39C5-L39C5

The template provisioner returns an output key that looks like `secret-thing: {{ encodeSecretRef "my-secret" "my-key" }}`; this is a way of encoding a Secret name and a key within that Secret. Then when you mount this as a file in the workload, you use `container.x.files[n].content: ${resources.foo.secret-thing}`; it will identify that this is a secret, and will mount it as a volume in the pod.

So your example above can look like this:

Provisioner:
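```yaml
- uri: template://default/mounted-secret
  type: mounted-secret
  outputs: |
    reference: {{ encodeSecretRef .State.service "password" }}
  manifests: |
    {{ if not (eq .WorkloadKind "Deployment") }}{{ fail "Kind not supported" }}{{ end }}
    - apiVersion: v1
      kind: Secret
      metadata:
        name: {{ .State.service }}
        annotations:
          k8s.score.dev/source-workload: {{ .SourceWorkload }}
          k8s.score.dev/resource-uid: {{ .Uid }}
          k8s.score.dev/resource-guid: {{ .Guid }}
        labels:
          app.kubernetes.io/managed-by: score-k8s
          app.kubernetes.io/name: {{ .State.service }}
          app.kubernetes.io/instance: {{ .State.service }}
      data:
        password: {{ .State.password | b64enc }}
```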
And your workload could look like:
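```yaml
...
containers:
  example:
    ...
    files:
    - target: /mnt/password
      content: ${resources.secret.reference}
resources:
  secret:
    type: mounted-secret
```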
That's what I was afraid of when choosing the Secret as the example. What about ServiceAccounts? PDBs? I don't want to expose this to the developer at all. Add-and-patch is the regular case. I wish this tool could meet me in the middle instead of forcing me to implement CMD provisioners with post-processing hooks. Oh well...
Yeah I agree that it doesn't handle other kinds of links between resources and workloads. In general we should look for ways to have the workload consume a resource, rather than the resource submitting arbitrary patches to workloads.
As seen on #29.
There's a suggestion that we should be able to access workload annotations or context which describes the output "shape" of the workload, since without this the patch may be invalid or unnecessary.
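One way that context could be supplied is via annotations on the resource request in the Score file. A hedged sketch; the resource type `k8s-patch` and the annotation key `k8s.score.dev/workload-kind` are purely illustrative, not an agreed interface:

```yaml
resources:
  scaling:
    type: k8s-patch
    metadata:
      annotations:
        # Hypothetical hint telling the provisioner what shape the
        # converted workload will have, so its patch targets are valid.
        k8s.score.dev/workload-kind: Deployment
```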
I've been looking at other Score implementations and similar issues and conversations, and I think I have a suggestion here. Similar prior art:
So I think we can probably do a similar thing here: currently the provisioners have a `manifests` output; we could add a `patches` output next to it. After converting the workload, we search all listed resources and apply any patch outputs in lexicographic order based on the resource name. This mechanism could be re-used by most of our other Score implementations in order to apply meaningful mixins.

We currently pass the workload name through to the provisioner, but the provisioner will need to understand more about the structure of the output Kubernetes spec in order to generate a meaningful patch. This could easily be done through the params or annotations on the resource for now while we investigate this further.

Example provisioners: as an example, here's a workload which requests a patch-providing resource, and a possible score-k8s implementation:
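A minimal sketch of the shape being proposed, assuming a hypothetical resource type `k8s-patch` and a `patches` output key (both illustrative, not a confirmed design):

```yaml
# Workload side (sketch): the workload opts in by declaring a resource
# whose provisioner is allowed to patch the converted output.
apiVersion: score.dev/v1b1
metadata:
  name: my-service
containers:
  main:
    image: my-image:latest
resources:
  scaling:
    type: k8s-patch
```

```yaml
# Provisioner side (sketch): alongside `manifests`, the provisioner returns
# `patches` that would be applied after the workload is converted, in
# lexicographic order of resource name.
- uri: template://default/k8s-patch
  type: k8s-patch
  patches: |
    - target:
        kind: Deployment
        name: "{{ .SourceWorkload }}"
      patch: |-
        - op: remove
          path: /spec/replicas
```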
A score-compose implementation would follow the same pattern.
Or something similar to that. This is also generic enough to work for other workload kinds in the future too. Thoughts @mathieu-benoit?
I'll see if I can put up a PR for this soon.
Yes, I think that works. Now, I'm wondering with your example: in the Score file, we will need to add a resource dependency explicitly, right? Note: same note/remark for both score-k8s and score-compose.
@mathieu-benoit Or just create a `workload` type and add your custom provisioner for that.
I don't think we should add automatic workload resources that surprise users.
I'm trying to implement an HPA provisioner and I'm facing an issue with Deployments' `spec.replicas` being set to 1. The documentation mentions this but doesn't explain why. I'd like to unset it, and the only supported option seems to be via patching in CLI arguments.

I've toyed with a similar tool and ended up implementing JSON patches in provisioner definitions. These are returned alongside the manifests and run at the end of the rendering pipeline. This way a provisioner functions similarly to kustomize's components. It makes provisioners more self-contained and removes the need to monkey-patch with CLI arguments or a kustomize step later.

What do you think? Was this discussed already somewhere?