operator v1: remove obsolete additional container ports #362
Conversation
Force-pushed from 39a1888 to 4ee4cfe
for i := 0; i < int(ptr.Deref(r.nodePool.Replicas, 0)); i++ {
	for _, n := range additionalNode0Config.Redpanda.AdvertisedKafkaAPI {
		ports = append(ports, corev1.ContainerPort{
			Name:          getAdditionalListenerPortName(n.Name, i),
			ContainerPort: int32(n.Port + i),
		})
	}
	if additionalNode0Config.Pandaproxy != nil {
		for _, n := range additionalNode0Config.Pandaproxy.AdvertisedPandaproxyAPI {
			ports = append(ports, corev1.ContainerPort{
				Name:          getAdditionalListenerPortName(n.Name, i),
				ContainerPort: int32(n.Port + i),
			})
		}
	}
}
I think the function is still needed, and the code should be changed to something like,
for i, n := range additionalNode0Config.Redpanda.KafkaAPI {
	ports = append(ports, corev1.ContainerPort{
		Name:          getAdditionalListenerPortName(n.Name, i),
		ContainerPort: int32(n.Port),
	})
}
if additionalNode0Config.Pandaproxy != nil {
	for i, n := range additionalNode0Config.Pandaproxy.PandaproxyAPI {
		ports = append(ports, corev1.ContainerPort{
			Name:          getAdditionalListenerPortName(n.Name, i),
			ContainerPort: int32(n.Port),
		})
	}
}
It depends on PL tests.
Ultimately we need to open the port that the RP broker listens on for the Kafka API and the proxy. An additional Kafka listener can have a different port than the internal one (9092) and the default external listener (30092), so we need to open it up. I hope the K8s StatefulSet controller will consolidate the container ports if there are duplicates.
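The difference between the two loops can be sketched with minimal stand-in types (the `listener` struct, function names, and port values below are illustrative, not the operator's actual API):

```go
package main

import "fmt"

// listener is a stand-in for an advertised-listener config entry.
type listener struct {
	Name string
	Port int
}

// perReplicaPorts mimics the old behavior: one container port per listener
// per replica (n.Port + i), so the port list grows on every scale-up.
func perReplicaPorts(listeners []listener, replicas int) []int {
	var ports []int
	for i := 0; i < replicas; i++ {
		for _, l := range listeners {
			ports = append(ports, l.Port+i)
		}
	}
	return ports
}

// perListenerPorts mimics the suggested fix: one container port per listener,
// independent of the replica count.
func perListenerPorts(listeners []listener) []int {
	var ports []int
	for _, l := range listeners {
		ports = append(ports, l.Port)
	}
	return ports
}

func main() {
	ls := []listener{{Name: "pl-kafka", Port: 30292}}
	fmt.Println(len(perReplicaPorts(ls, 3))) // grows with replicas: 3
	fmt.Println(len(perListenerPorts(ls)))   // stable: 1
}
```

Because the per-replica list depends on the replica count, every scale-up changes the pod template and rolls all pods; the per-listener list stays stable.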
@paulzhang97 updated.
I also had to change some semantics. I found out that we actually never add the correct port, only the advertised port, as these seem to override each other. So in the end, the advertised port was added to the container ports, but it's useless, because nothing listens on that port (it's only a Kafka API thing, and for the load balancer).
The definitions of advertised and port in additionalProperties cancelled each other out. But it did not matter, since ports work even if they are not declared in container ports.
In the end, it would be much better to properly define listeners in spec.configuration.kafkaAPI, instead of "hacking in" these listeners via env vars passed into the configurator. But this goes beyond this PR.
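To illustrate the "env vars passed into the configurator" shape being criticized here, a minimal sketch of what decoding such a value could look like (the env-var layout, JSON field names, and `advertisedListener` type are hypothetical, not the configurator's actual format):

```go
package main

import (
	"encoding/json"
	"fmt"
)

// advertisedListener is a hypothetical shape for a listener passed to the
// configurator as a JSON-encoded env var, rather than declared in the CRD spec.
type advertisedListener struct {
	Name    string `json:"name"`
	Address string `json:"address"`
	Port    int    `json:"port"`
}

// parseListeners decodes a JSON array of listeners from a raw env-var value.
func parseListeners(raw string) ([]advertisedListener, error) {
	var ls []advertisedListener
	if err := json.Unmarshal([]byte(raw), &ls); err != nil {
		return nil, err
	}
	return ls, nil
}

func main() {
	// Value as it might appear in an env var on the configurator container.
	raw := `[{"name":"pl-kafka","address":"0.0.0.0","port":30292}]`
	ls, err := parseListeners(raw)
	if err != nil {
		panic(err)
	}
	fmt.Println(ls[0].Name, ls[0].Port) // pl-kafka 30292
}
```

The point of the comment stands either way: listeners declared in spec.configuration.kafkaAPI are typed and validated by the API server, while a stringly-typed env var is opaque to it.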
I was wondering why it still works even if containerPort is not set correctly. Per the API reference for containerPort: "List of ports to expose from the container. Not specifying a port here DOES NOT prevent that port from being exposed. Any port which is listening on the default "0.0.0.0" address inside a container will be accessible from the network. Modifying this array with strategic merge patch may corrupt the data. For more information See https://github.com/kubernetes/kubernetes/issues/108255. Cannot be updated."
I also see it mentioned elsewhere that the same applies. It is mainly for informational purposes.
@paulzhang97 AFAIK it's because containerPort can override the container's exposed ports at runtime. The redpanda container image has the following ports exposed by default:
$ docker inspect docker.redpanda.com/redpandadata/redpanda:v24.2.11 |jq -r '.[].Config.ExposedPorts'
{
"8081/tcp": {},
"8082/tcp": {},
"9092/tcp": {},
"9644/tcp": {}
}
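The ExposedPorts map above is plain JSON, so extracting the port numbers from `docker inspect` output is straightforward; a small sketch (the helper name and trimming of the `/tcp` suffix are illustrative choices, not a docker API):

```go
package main

import (
	"encoding/json"
	"fmt"
	"sort"
	"strings"
)

// exposedPorts decodes an ExposedPorts map as printed by `docker inspect`
// (keys like "9092/tcp", values empty objects) into a sorted port list.
func exposedPorts(raw string) ([]string, error) {
	var m map[string]struct{}
	if err := json.Unmarshal([]byte(raw), &m); err != nil {
		return nil, err
	}
	ports := make([]string, 0, len(m))
	for k := range m {
		// Keep just the numeric part; all entries here are TCP.
		ports = append(ports, strings.TrimSuffix(k, "/tcp"))
	}
	sort.Strings(ports)
	return ports, nil
}

func main() {
	raw := `{"8081/tcp":{},"8082/tcp":{},"9092/tcp":{},"9644/tcp":{}}`
	ports, err := exposedPorts(raw)
	if err != nil {
		panic(err)
	}
	fmt.Println(ports) // [8081 8082 9092 9644]
}
```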
Force-pushed from 5128b68 to 94a735e
@birdayz do you have a new operator image? I can help test it with PL/PSC enabled.
Force-pushed from 94a735e to 2875f03
Prior to this patch, a container port was added to the pod spec for every broker that exists. The initial idea was to do this for AWS PrivateLink and GCP Private Service Connect. However, this was never used, and is not required: the original container port, which is the same for every pod, is used.

The problem with these ports: they are part of the pod spec, and therefore added to every pod. Scaling up adds a pod, which adds another container port, and therefore rolls all pods. This is undesired behavior and causes unwanted restarts.

In addition, the advertised port and the kafka/proxy port had the same names in additionalProperties (pl-proxy, for example) and overrode each other. The port actually used for AWS PL in Redpanda Cloud was never added, only the advertised one (which is obsolete). It did not really matter, because ports work even if not added to container ports.
Force-pushed from 2875f03 to f1407c2
Tested internally in cloud. Unnecessary Pod rolls don't happen anymore on scale-up.
@RafalKorepta @chrisseto @andrewstucki this is now ready for review. thanks!
Prior to this patch, a container port was added to the pod spec for every broker that exists. The initial idea was to do this for AWS PrivateLink or GCP Private Service Connect. However, this was never used, and is not required: the original container port, which is the same for every pod, is used. These additional ports are neither used nor required (and IMHO not by design anyway).

The problem with these ports: they are part of the pod spec, and therefore added to every pod. Scaling up adds a pod, which adds another container port to every pod's spec, and therefore rolls all Pods. This is undesired behavior and causes unwanted restarts.
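The roll-on-scale-up mechanism described above can be sketched as a hash comparison. Real StatefulSet controllers hash the entire serialized pod template (the controller-revision-hash); the stand-in below hashes only a container-port list, which is enough to show why a port list that depends on the replica count forces a restart:

```go
package main

import (
	"crypto/sha256"
	"fmt"
)

// templateHash is a simplified stand-in for a pod-template revision hash:
// any change in the serialized spec (here, just the container-port list)
// yields a new hash, which is what triggers a rolling restart.
func templateHash(ports []int) string {
	h := sha256.New()
	for _, p := range ports {
		fmt.Fprintf(h, "%d,", p)
	}
	return fmt.Sprintf("%x", h.Sum(nil))[:8]
}

func main() {
	// Old behavior: per-replica ports, so scaling 3 -> 4 changes the template.
	before := templateHash([]int{30292, 30293, 30294})
	after := templateHash([]int{30292, 30293, 30294, 30295})
	fmt.Println(before != after) // true: template changed, all pods roll

	// Fixed behavior: per-listener ports are the same at any replica count.
	fmt.Println(templateHash([]int{30292}) == templateHash([]int{30292})) // true: no roll
}
```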