Split Client server into two images #227
Conversation
@JasonPowr I don't like the concept of having two pods at all, TBH. I know you have a reason to do that, so I am thinking about having both containers running in a single pod. That would require the containers to serve on different ports. Would it be possible to configure this using an ENV variable or similar?
I agree it's not the nicest solution. I'm not sure if I can configure it using an env var, but I'll look into it. The main reason for the separate pods is that the Apache web server automatically binds to port 8080 by default, and running the two images in the same pod results in a port binding error.
@JasonPowr Stepping back, what is the "why" for making this change?
@cooktheryan I believe it's because the client server image got too big during TP2 and was breaking builds: https://issues.redhat.com/browse/SECURESIGN-624
@bouskaJ So I don't think there is a way to set it up using env vars. If we don't want it running in two pods, I believe we have two options. We could do something like this:
Or we could move the sed command into the build process of the second client server image. WDYT?
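A minimal sketch of what the "sed at build time" option might look like. The config path (`/etc/httpd/conf/httpd.conf`) and the target port (8082) are assumptions for illustration only, not taken from the actual client-server images:

```shell
# Hypothetical build-time rewrite of Apache's default Listen port so the
# second image no longer collides with the first on 8080. In a
# Containerfile this would be a RUN step, e.g.:
#   RUN sed -i 's/^Listen 8080$/Listen 8082/' /etc/httpd/conf/httpd.conf
# Demonstrated here against a stand-in config fed via a heredoc:
sed 's/^Listen 8080$/Listen 8082/' <<'EOF'
Listen 8080
EOF
```

The downside, as noted later in the thread, is that this hacks the image rather than making the port properly configurable.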
Yes - it's not the image itself, but the
IMO the container should default to 8080. The reason we are having to do all of these shenanigans is that we want both servers to run from the same pod. So I think having the
Yes, I agree with @lance; I would not hack the container image. I still think that having both in one pod (including the hack) is less messy than having two deployments, but let's hear from others. @osmman @cooktheryan WDYT?
Did you think about using sidecars? Basically, have one container with the HTTP server and sidecars with the binaries.
Good idea @osmman, I like it! Here is a nice example with an nginx server: https://kubernetes.io/docs/tasks/access-application-cluster/communicate-containers-same-pod-shared-volume/
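A rough sketch of what the sidecar pattern from that example could look like here. All names (images, volume, paths) are hypothetical placeholders for illustration, not the actual Securesign images:

```yaml
# Hypothetical pod: sidecar containers holding the CLI binaries copy
# them into a shared emptyDir volume, and a single HTTP server
# container serves everything from that volume on one port.
apiVersion: v1
kind: Pod
metadata:
  name: client-server
spec:
  volumes:
    - name: shared-data
      emptyDir: {}
  initContainers:
    # Binary-only sidecar; copies its clients into the shared volume.
    - name: client-binaries
      image: example/client-server-binaries:latest   # assumed image name
      command: ["sh", "-c", "cp /opt/clients/* /data/"]
      volumeMounts:
        - name: shared-data
          mountPath: /data
  containers:
    # One HTTP server, so there is no port conflict inside the pod.
    - name: http-server
      image: nginx:1.25
      ports:
        - containerPort: 8080
      volumeMounts:
        - name: shared-data
          mountPath: /usr/share/nginx/html
```

This avoids the port-binding problem entirely, since only one container in the pod runs a web server.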
+1 to @osmman's idea; that would be a very smooth way to solve this
@bouskaJ @osmman So to do this I'll need to make changes to the client-server images and remove the HTTP aspects of them. Before I begin the process: is this what you're looking for, or am I misunderstanding?
This is a very smooth implementation of this idea, +1
/test tas-operator-e2e
/retest
[APPROVALNOTIFIER] This PR is APPROVED. This pull request has been approved by: JasonPowr, osmman. The full list of commands accepted by this bot can be found here. The pull request process is described here.
Needs approval from an approver in each of these files:
Approvers can indicate their approval by writing
/lgtm
This PR makes the necessary changes to the client server: after the split, one image contains cosign and gitsign, and the other contains rekor-cli and ec.