
Split Client server into two images #227

Merged: 1 commit merged into main from client-server-changes on Mar 4, 2024

Conversation

@JasonPowr (Contributor) commented Feb 28, 2024

This PR makes the necessary changes to the client server after the split: one image contains cosign and gitsign, the other contains rekor-cli and ec.

openshift-ci bot requested review from bouskaJ and sallyom February 28, 2024 11:58
@bouskaJ (Collaborator) commented Feb 28, 2024

@JasonPowr I don't like the concept of having two pods at all, TBH. I know you have a reason to do that, so I am thinking about having both containers running in a single pod. That would require the containers to serve on different ports. Would it be possible to configure this using an ENV variable or something similar?

@JasonPowr (Contributor, Author)

> @JasonPowr I don't like the concept of having two pods at all, TBH. I know you have a reason to do that, so I am thinking about having both containers running in a single pod. That would require the containers to serve on different ports. Would it be possible to configure this using an ENV variable or something similar?

I agree it's not the nicest solution. I'm not sure if I can configure it using an env var, but I'll look into it. The main reason for the separate pods is that the Apache web server automatically binds to port 8080 by default, so running the two images in the same pod results in a port binding error.

@cooktheryan (Collaborator)

@JasonPowr stepping back, what is the "why" for making this change?

@JasonPowr (Contributor, Author)

> @JasonPowr stepping back, what is the "why" for making this change?

@cooktheryan I believe it's because the client server image got too big during TP2 and was breaking builds: https://issues.redhat.com/browse/SECURESIGN-624

@JasonPowr (Contributor, Author)

@bouskaJ So I don't think there is a way I can set it up using env vars. If we don't want it running as two pods, I believe we have two options.

We could do something like this:

Containers: []core.Container{
	{
		Name:            cliServerName,
		Image:           image,
		ImagePullPolicy: "IfNotPresent",
		Ports: []core.ContainerPort{
			{
				ContainerPort: 8080,
				Protocol:      "TCP",
			},
		},
		Command: []string{"/bin/sh", "-c"},
		Args: []string{
			"sed -i 's/Listen 0.0.0.0:8080/Listen 0.0.0.0:8081/' /etc/httpd/conf/httpd.conf && run-httpd",
		},
	},
},

Or we could move the sed command to the build process of the second client server image, wdyt?
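
For the second option, a minimal sketch of how the deployment side could then look, assuming a hypothetical rekor-cli/ec image rebuilt so that httpd.conf already listens on 8081; the container name and the image variable are placeholders, not anything that exists in this PR:

// Sketch only: assumes a second client-server image whose httpd.conf was
// changed to "Listen 0.0.0.0:8081" at build time, so no runtime override is needed.
{
	Name:            "cli-server-re", // hypothetical name for the rekor-cli/ec server
	Image:           secondImage,     // placeholder for the second client-server image
	ImagePullPolicy: "IfNotPresent",
	Ports: []core.ContainerPort{
		{
			ContainerPort: 8081,
			Protocol:      "TCP",
		},
	},
},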

@lance (Member) commented Feb 28, 2024

> @JasonPowr stepping back, what is the "why" for making this change?
>
> @cooktheryan I believe it's because the client server image got too big during TP2 and was breaking builds: https://issues.redhat.com/browse/SECURESIGN-624

Yes - it's not the image itself, but the source-image-build task results are too large for konflux. This is because it was including all of the source for cosign, gitsign, rekor-cli and ec. Once we added ec to the mix it went over the task result size limit, and that was not something easily fixable by the konflux team iiuc.

@lance (Member) commented Feb 28, 2024

> We could do something like this:
>
> [Go container spec quoted from the previous comment, using sed to move the httpd Listen port to 8081]
>
> Or we could move the sed command to the build process of the second client server image, wdyt?

IMO the container should default to 8080. The reason we are having to do all of these shenanigans is because we want both servers to run from the same pod. So I think having the sed command to hack the port here in the Container command args is the right place for it. Don't forget to set ContainerPort to 8081 on that Container spec too.
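
Concretely, that second Container entry might look something like the following sketch, keeping the image's default of 8080 and rewriting it at startup; the container name and the image variable are placeholders:

{
	Name:            "cli-server-re", // hypothetical name for the rekor-cli/ec server
	Image:           secondImage,     // placeholder for the second client-server image
	ImagePullPolicy: "IfNotPresent",
	Ports: []core.ContainerPort{
		{
			ContainerPort: 8081, // must match the port that sed rewrites below
			Protocol:      "TCP",
		},
	},
	Command: []string{"/bin/sh", "-c"},
	Args: []string{
		"sed -i 's/Listen 0.0.0.0:8080/Listen 0.0.0.0:8081/' /etc/httpd/conf/httpd.conf && run-httpd",
	},
},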

@bouskaJ (Collaborator) commented Feb 29, 2024

> IMO the container should default to 8080. The reason we are having to do all of these shenanigans is because we want both servers to run from the same pod. So I think having the sed command to hack the port here in the Container command args is the right place for it. Don't forget to set ContainerPort to 8081 on that Container spec too.

Yes, I agree with @lance, I would not hack the container image. I still think that having both in one pod (including the hack) is less messy than having two deployments, but let's hear from others. @osmman @cooktheryan WDYT?

@osmman (Contributor) commented Feb 29, 2024

> Yes, I agree with @lance, I would not hack the container image. I still think that having both in one pod (including the hack) is less messy than having two deployments, but let's hear from others. @osmman @cooktheryan WDYT?

Did you think about using sidecars? Basically, have one container with the HTTP server and sidecar containers with the binaries.

@bouskaJ (Collaborator) commented Feb 29, 2024

> Did you think about using sidecars? Basically, have one container with the HTTP server and sidecar containers with the binaries.

Good idea @osmman, I like it! Here is a nice example with an nginx server: https://kubernetes.io/docs/tasks/access-application-cluster/communicate-containers-same-pod-shared-volume/

@cooktheryan (Collaborator)

+1 to @osmman's idea; that would be a very smooth way to solve this.

@JasonPowr (Contributor, Author)

@bouskaJ @osmman So to do this I'll need to make changes to the client-server images and remove the HTTP aspects of them. Before I begin the process, are you looking for something like this, or am I misunderstanding?

Spec: core.PodSpec{
	// Shared emptyDir volume: the init containers populate it with the client
	// binaries and the httpd container serves it.
	Volumes: []core.Volume{
		{
			Name: "shared-data",
			VolumeSource: core.VolumeSource{
				EmptyDir: &core.EmptyDirVolumeSource{},
			},
		},
	},
	// Init containers copy the pre-built clients (cosign/gitsign and rekor-cli/ec)
	// from each client-server image into the shared volume.
	InitContainers: []core.Container{
		{
			Name:    "init-shared-data-cg",
			Image:   "quay.io/japower/client_server_test:latest",
			Command: []string{"sh", "-c", "cp -r /opt/app-root/src/clients/* /var/www/html/clients/"},
			VolumeMounts: []core.VolumeMount{
				{
					Name:      "shared-data",
					MountPath: "/var/www/html/clients/",
				},
			},
		},
		{
			Name:    "init-shared-data-re",
			Image:   "quay.io/japower/client_server_test_re:latest",
			Command: []string{"sh", "-c", "cp -r /opt/app-root/src/clients/* /var/www/html/clients/"},
			VolumeMounts: []core.VolumeMount{
				{
					Name:      "shared-data",
					MountPath: "/var/www/html/clients/",
				},
			},
		},
	},
	// A single httpd container then serves all of the clients from the shared volume on port 8080.
	Containers: []core.Container{
		{
			Name:            cliServerName,
			Image:           "registry.access.redhat.com/ubi9/httpd-24@sha256:965f7b03ae8f45228bad765ce47bc8956711e854213df0e4cee8623d51317b0a",
			ImagePullPolicy: core.PullAlways,
			Ports: []core.ContainerPort{
				{
					ContainerPort: 8080,
					Protocol:      core.ProtocolTCP,
				},
			},
			VolumeMounts: []core.VolumeMount{
				{
					Name:      "shared-data",
					MountPath: "/var/www/html/clients/",
				},
			},
		},
	},
},

@cooktheryan (Collaborator)

This is a very smooth implementation of this idea. +1

@JasonPowr (Contributor, Author)

/test tas-operator-e2e

@JasonPowr (Contributor, Author)

/retest

openshift-ci bot commented Mar 4, 2024

[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: JasonPowr, osmman

The full list of commands accepted by this bot can be found here.

The pull request process is described here

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@osmman (Contributor) commented Mar 4, 2024

/lgtm

openshift-merge-bot merged commit cfc037b into main Mar 4, 2024
12 checks passed
@osmman deleted the client-server-changes branch April 5, 2024 12:44