Gateway API: add an annotation to control the 15021 status port in the generated Service #3400
base: master
Conversation
😊 Welcome @vpedosyuk! This is either your first contribution to the Istio api repo, or it's been a while since your last one. You can learn more about the Istio working groups, Code of Conduct, and contribution guidelines. Thanks for contributing! Courtesy of your friendly welcome wagon.
Hi @vpedosyuk. Thanks for your PR. I'm waiting for an Istio member to verify that this patch is reasonable to test. If it is, they should reply with /ok-to-test. Once the patch is verified, the new status will be reflected by the ok-to-test label. I understand the commands that are listed here. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.
Alternative: an install option for Istiod (I assume someone using the status port won't want it only for a subset of gateways).
If there really is a common use case for users to fine-tune each gateway with a different port, a design doc or a survey of what other gateway implementations (in K8s) use for status and health checking would be great before defining a new Istio-only API, so at least we know what we are reinventing.
AFAIK we already have a way to fine-tune the Service - I don't remember if we documented the naming scheme or if it is internal only - but like anything alpha/experimental it would be hard to change, and users are already familiar with configuring a Service.
/ok-to-test
Do you need this configuration per gateway? I tend to agree an install option seems better.
Hi! An install option is indeed an alternative. I actually started with this approach, but then it seemed to me that having a dedicated annotation might be a slightly more intuitive approach. It would nicely fit with the existing networking.istio.io/service-type annotation.
I suppose there's one pretty common use case where an Istio gateway's Service is kept internal (ClusterIP) behind another load balancer. Such an internal gateway would have the annotations:
networking.istio.io/service-type: "ClusterIP"
networking.istio.io/service-expose-status-port: "true"
I think it's a perfectly possible situation when a user would apply the above scheme for all incoming HTTP traffic (e.g. to integrate with GCP Cloud Armor seamlessly). But then for TCP traffic like Kafka, or some special HTTP traffic, they'd expose another gateway with the annotations:
networking.istio.io/service-type: "LoadBalancer"
networking.istio.io/service-expose-status-port: "false"
I don't think the install option would be sufficient for this use case.
…e generated Service: updated from 58d68d8 to dafd0ee
@vpedosyuk: The following test failed, say /retest to rerun all failed tests.
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository. I understand the commands that are listed here.
@costinm I've checked multiple Gateway API implementations, and it seems like only Envoy Gateway implements deep … So when it comes to the …, I suppose the implementation of that API in Istio is being discussed in #46594. I don't know if it would take this particular issue with the 15021 port into account. Though, it would be great to avoid Service spec YAML patches through parametersRef just to remove this conflicting port from my GKE load balancers 😅.
I understand the concern about your gateway, but we would have to maintain this API for a very long time, expose the status port as an API (which I think is more of an implementation detail and doesn't have a standard or documented behavior), and add more complexity to whatever is reading the status port.
Perhaps a cleaner approach would be to define a new Listener - because it is an actual listener - and use its name in the code to identify the intent for it to be a status port. No API change, and it would work for the other 'magic ports' we have (capture, etc.). A benefit is that other policies could be attached, maybe even routes and security. The question would be how to indicate internal handlers - maybe using reserved names for the backendRef, like x-status or x-prometheus.
The much simpler approach is to customize the injection template or add a mesh config option.
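To make the listener-based alternative concrete, a rough sketch follows. This is not an existing Istio or Gateway API feature; the reserved listener name (x-status) is purely illustrative of the idea above.

```yaml
# Sketch only: a listener whose reserved name marks it as the status port,
# so the implementation can recognize the intent without a new annotation.
apiVersion: gateway.networking.k8s.io/v1
kind: Gateway
metadata:
  name: my-gateway                  # hypothetical name
spec:
  gatewayClassName: istio
  listeners:
  - name: http
    port: 80
    protocol: HTTP
  - name: x-status                  # hypothetical reserved name for the status/health port
    port: 15021
    protocol: HTTP
```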
This PR adds a new Gateway annotation: networking.istio.io/service-expose-status-port. This is related to istio/istio#54453 and is a dependency for istio/istio#54525.
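For context, the intended effect on the generated Service might look roughly like the snippet below. This is an assumption based on the PR title and the linked issues, not actual controller output; the Service name, selector label, and port list are hypothetical.

```yaml
# Hypothetical Service generated for a Gateway annotated with
# networking.istio.io/service-expose-status-port: "false" - the 15021
# status-port entry is left out, so an external load balancer (e.g. on GKE)
# does not register it as an extra backend port.
apiVersion: v1
kind: Service
metadata:
  name: external-tcp-gw-istio       # hypothetical generated name
spec:
  type: LoadBalancer
  selector:
    gateway.networking.k8s.io/gateway-name: external-tcp-gw   # label assumed for illustration
  ports:
  - name: kafka
    port: 9092
    protocol: TCP
  # note: no "status-port" (15021) entry here
```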