Consider exposing more ports in generated service objects in situations where pod has more than one port #171
Comments
The current workaround is to create a dedicated ClusterIP service that maps the scraping port.
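For illustration, a minimal sketch of such a workaround service, assuming the metrics endpoint from the report below is served on port 3001 and the pods carry an `app: my-service` label (both assumptions, adjust to your deployment):

```yaml
# Dedicated ClusterIP Service that exposes only the metrics port so the
# kubernetes-pods-istio-secure job can scrape the sidecar-injected pods.
# Port number and selector are illustrative assumptions.
apiVersion: v1
kind: Service
metadata:
  name: my-service-metrics
spec:
  type: ClusterIP
  selector:
    app: my-service
  ports:
  - name: http-metrics
    port: 3001
    targetPort: 3001
    protocol: TCP
```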
The Metrics and Logs section in the Istio FAQ also mentions this scrape target and mTLS, with some additional information in case it's relevant to the use case of this issue. As @stefanprodan mentioned, adding a dedicated service that maps the scraping port solves the issue for now without changes to Flagger. Is it worth adding some info on this to the Flagger GKE Istio docs in the meantime?
Starting with v0.15.0, Flagger generates destination rules and the port has been removed from the virtual service. The only place where the port is now specified is the ClusterIP object.
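For reference, a rough sketch of what those generated destination rules could look like; the object names and namespace here are assumptions based on the examples later in this thread, not Flagger's verbatim output:

```yaml
# Illustrative sketch of the primary/canary destination rules Flagger creates.
# Names and namespace are assumed for the my-service example used below.
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: my-service-primary
  namespace: prod
spec:
  host: my-service-primary
---
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: my-service-canary
  namespace: prod
spec:
  host: my-service-canary
```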
I have a similar issue where I'm exposing a gRPC service via 8000 and a gRPC REST gateway proxy via 8080. Since the gateway proxy is generated via protoc, we always version it together with the application and distribute them as a single entity. Separate ports are used because the gRPC service is Java based. Previously I handled the routing via a virtual service definition similar to this:

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: my-service
spec:
  hosts:
  - "my-service.mydomain.com"
  gateways:
  - public-gateway
  http:
  - match:
    - uri:
        prefix: "/rest/"
    rewrite:
      uri: "/"
    route:
    - destination:
        host: my-service.prod.svc.cluster.local
        port:
          number: 9090
  - match:
    - uri:
        prefix: /our_base_rpc_package
    route:
    - destination:
        host: my-service.prod.svc.cluster.local
        port:
          number: 9000
```

I'm having a little difficulty coming up with a suitable workaround using the available configuration in the Canary CRD. Any suggestions? I wonder if supporting HTTPRoute in the Canary resource would lower the effort of adoption?
The HTTPRoute can't be exposed in the CRD since Flagger needs to create the primary and canary routes and modify the weights. Assuming you want to run the canary analysis on port 9090:

```yaml
apiVersion: flagger.app/v1alpha3
kind: Canary
spec:
  service:
    port: 9090
    # do not add the mesh gateway below or the merge will break
    gateways:
    - public-gateway.istio-system.svc.cluster.local
    hosts:
    - my-service.mydomain.com
    match:
    - uri:
        prefix: "/rest/"
    rewrite:
      uri: "/"
```

Expose port 9000 with a ClusterIP service that points to the primary pods:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-service-grpc
spec:
  type: ClusterIP
  selector:
    app: my-service-primary
  ports:
  - name: grpc
    port: 9000
    protocol: TCP
```

Expose the gRPC service with a virtual service on the same external host and gateway (Pilot should merge the two virtual services):

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: my-service-grpc
spec:
  gateways:
  - public-gateway.istio-system.svc.cluster.local
  hosts:
  - my-service.mydomain.com
  http:
  - match:
    - uri:
        prefix: /our_base_rpc_package
    route:
    - destination:
        host: my-service-grpc.prod.svc.cluster.local
        port:
          number: 9000
```
That makes total sense now that I think about it. I think this example should work just fine. I'll try it out now. Thanks for the help!
As discussed on Slack previously:
When using GKE with Istio, and a Prometheus deployment configured according to the GKE Istio docs, you end up with a Prometheus instance with a custom scrape target, `kubernetes-pods-istio-secure`, that scrapes all pods that have an Envoy sidecar. In situations where the scrape target port is different from the main service port (e.g. a gRPC service on port 3000 and an HTTP service on port 3001 serving a `GET /metrics` endpoint), Prometheus is no longer able to scrape metrics from the Canary-managed deployment/services, as port 3001 (in this example) is not part of the mesh since no service points to it.

To work around this, Flagger should consider exposing all ports mapped in the pod definition in the generated service objects, but only use the port specified in the CRD when wiring up the virtual service.
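For illustration only, a sketch of what a generated ClusterIP object could look like under this proposal, using the port numbers from the example above (names, labels, and structure are assumptions, not current Flagger behaviour):

```yaml
# Hypothetical generated Service if Flagger exposed every container port.
# Only port 3000 (the port declared in the Canary CRD) would be wired into
# the virtual service; port 3001 is exposed solely so Prometheus can reach
# the GET /metrics endpoint through the mesh.
apiVersion: v1
kind: Service
metadata:
  name: my-service-primary
spec:
  type: ClusterIP
  selector:
    app: my-service-primary
  ports:
  - name: grpc
    port: 3000
    protocol: TCP
  - name: http-metrics
    port: 3001
    protocol: TCP
```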