Expose collector outside of cluster #902
We would like to support this for Kubernetes, but also for OpenShift via OCP routes. I did some research on OCP and, to make it work, the |
Wouldn't this typically be done on k8s via deployment of a |
Another option is adding k8s Ingress support. Some ingress controllers have good support for gRPC as well |
Hey, what would be a suitable solution for this? I have summarized my thoughts here and would be very happy about your feedback. As an extension of the CRD, I was thinking of something like the ingress entry in the following code snippet (inspired by the Skupper operator [1]):

// Mode represents how the collector should be deployed (deployment, daemonset, statefulset or sidecar)
// +optional
Mode Mode `json:"mode,omitempty"`

// Ingress is used to specify how the OpenTelemetry Collector is exposed. This
// functionality is only available if one of the valid modes is set.
// Valid modes are: deployment, daemonset and statefulset.
// +optional
Ingress struct {
	// Type default value is: none
	// Supported types are: route/loadbalancer/nodeport/nginx-ingress-v1/ingress
	Type string
	// Hostname by which the ingress proxy can be reached.
	Hostname string
	...
}

What are your thoughts on this?
Might be related: |
After playing around, I have now ended up with this CR extension. What are your thoughts? This would eliminate a few manual steps on the receiver side. In open-telemetry/opentelemetry.io#1684 I wrote all the manual steps down.

// Ingress is used to specify how the OpenTelemetry Collector is exposed. This
// functionality is only available if one of the valid modes is set.
// Valid modes are: deployment, daemonset and statefulset.
type Ingress struct {
	// Type default value is: none
	// Supported types are: route/loadbalancer/nodeport/ingress
	Type string
	// IngressClassName is the name of the IngressClass cluster resource.
	ClassName string
	// Hostname by which the ingress proxy can be reached.
	Hostname string
	// Protocol used by the exposed backend.
	Protocol string
	// Annotations to add to the ingress, separated by commas,
	// e.g. 'cert-manager.io/cluster-issuer: "letsencrypt"'
	// +optional
	Annotations string
	// TLS configuration.
	// +optional
	TLS []networkingv1.IngressTLS
} |
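For illustration, a collector CR using the draft fields above might look like the following sketch (hypothetical: the field names follow the proposal in this comment, not a released API; hostname and annotation values are placeholders):

```yaml
apiVersion: opentelemetry.io/v1alpha1
kind: OpenTelemetryCollector
metadata:
  name: simplest
spec:
  mode: deployment
  ingress:
    # "type" would select one of: route/loadbalancer/nodeport/ingress
    type: ingress
    className: nginx
    hostname: otel.example.com
    protocol: grpc
    annotations: 'cert-manager.io/cluster-issuer: "letsencrypt"'
```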
@frzifus, +1 for your idea. |
@frzifus Looks like Kubernetes already has all the types for ingress. Can we just reuse them in your implementation? Also, it would be better to have annotations as map[string]string. For example, with the AWS ingress controller I have 8-10 different annotations per ingress. |
Can someone summarize the current status regarding what's supported and what's not? I'm really confused, and I'd appreciate a shareable success story with details to follow. I happened to work on this subject in the past week, and came here after reading the blog, which was posted about ten days ago. According to the blog, it sounds like Ben (@frzifus) successfully made an OTLP gRPC connection over an nginx ingress between two OTel collectors. However, the example instructions are questionable to me. For example, in the edge-side client OTel configuration, the endpoint should have a port number.
Anyway, I'll admit I'm still struggling to create an OTLP connection over a Traefik ingress, whether gRPC or HTTPS. I've successfully made an HTTPS connection over the Traefik ingress to browse an HTTPS server. I thought otlphttp would be something similar, but I was wrong. I got some success, but still have questions and problems.

otlphttp config (Client OTEL, Server OTEL, Service, Ingress)

I started with the configuration above. The client OTel got 502 errors, and the server OTel said the client did not provide a certificate. Then I disabled the server-side client certificate verification by removing … As you may see, the ingress routing path has to be "/". If I change it to something like … (client exporter endpoint to be …)
Anyway, I'm still having two issues with this otlphttp over ingress:
I'm having much more trouble with otlpgrpc.

otlpgrpc config (Client OTEL, Server OTEL, Service, Ingress)

Then I started the gRPC test with the configuration above. Like HTTPS, the client got … Then I deleted … I certainly need a lot of help here. |
@yuriolisa yes, depending on the selected type, an ingress or route entry should be created.
@sergeyshevch Do you mean we should embed
Sounds legitimate. I don't have a strong opinion on that.
Hi @PengWang-SMARTM, the goal is to simplify exactly this point. A desirable end result would be to configure a hostname, routing types, annotations and TLS in the OTel collector CRD. Then the collector should be reachable from outside the cluster. Currently I am trying to figure out how to expose the different receivers in an elegant way. One consideration would be to route the various receiver endpoints via the URL. A translation would look like this:

receivers:
  otlp:
    protocols:
      grpc:
  otlp/2:
    protocols:
      http:
  jaeger:
    protocols:
      grpc:
      thrift_binary:
      thrift_compact:

Configured hostname:
This way, however, it would be difficult to allow different TLS configurations for different collectors. What are your thoughts, @yuriolisa, @pavolloffay and all the others? Regarding the issue you faced with exposing the collector:
I don't really have experience with Traefik, but could it be that you need to enable TLS passthrough? It seems that the TLS certificate is no longer present; Traefik has already stripped it. In the setup described in the blog post I had the same problem and only had a TLS connection up to the nginx proxy.
Thanks for the info. In fact the port number was missing. I have fixed the error. Did you notice anything else? |
@frzifus, I'm new to Kubernetes, and I haven't had a chance to learn how to use CRDs yet. Since I have a working otlphttp connection, I'll come back to the OTLP gRPC connection at a later time. Regarding the gRPC configuration mentioned in your blog, I suspect … That's because, from the Ingress point of view, like routing http(s), it needs something to distinguish the target pod:port so it knows where to forward the incoming requests from port 443. Unless only one gRPC route is allowed through the Ingress, there has to be something like the path for http (or a subdomain), or each target pod:port needs a unique port number. Regarding the certificate/502 issue, I feel it's something with certificate passthrough too. I read around and noticed nginx has an annotation to enable the passthrough. However, I have not found the equivalent for Traefik yet. |
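For reference, the nginx annotation alluded to is `nginx.ingress.kubernetes.io/ssl-passthrough`. A minimal sketch (assuming the ingress-nginx controller was started with `--enable-ssl-passthrough`, and reusing the service name and port that appear elsewhere in this thread; the host is a placeholder):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: otel-collector
  annotations:
    # Forward the TLS stream unterminated to the backend pod, so the
    # collector itself (not the proxy) presents and verifies certificates.
    nginx.ingress.kubernetes.io/ssl-passthrough: "true"
spec:
  rules:
  - host: otel.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: otel-collector-app-collector
            port:
              number: 4317
```

Note that with passthrough the proxy routes on SNI (the host), so path-based fan-out to multiple receivers is not available.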
In the example case it is specified in an ingress rule:

rules:
- host: <REPLACE: your domain endpoint, e.g.: "[email protected]">
  http:
    paths:
    - pathType: Prefix
      path: "/"
      backend:
        service:
          name: otel-collector-app-collector
          port:
            number: 4317

There you can map different paths to different service port numbers. Since v0.61.0 there is an Ingress entry in the CRD. Based on the specified ports and collectors, rules for routing the traffic are created automatically. Example:

$ kubectl apply -f tests/e2e/ingress/00-install.yaml
$ kubectl describe ingress
...
Rules:
Host Path Backends
---- ---- --------
example.com
/otlp-grpc simplest-collector:otlp-grpc (10.244.0.15:4317)
/otlp-http simplest-collector:otlp-http (10.244.0.15:4318)
Currently I am working on OpenShift route support and came across the ingress-to-route controller [method]. What do you think, would it make sense to expose the |
How are people actually leveraging an ingress to get data into an OTel collector running in a cluster now? I am running into similar issues as @frzifus.
What am I missing here, and how are people dealing with the above? The way I have worked around this for now is to manually patch the ingress path to |
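A scripted version of that workaround could use a JSON patch file (a sketch; the rule and path indices are placeholders for your setup, and be aware the operator may revert manual edits on its next reconcile):

```yaml
# patch.yaml - rewrite the first rule's first path to "/"
- op: replace
  path: /spec/rules/0/http/paths/0/path
  value: /
```

Applied with `kubectl patch ingress <name> --type=json --patch-file patch.yaml`.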
@fredthomsen this is not quite clear. By default, the operator will use … Short of adding support to receivers, exporters and SDKs, you can utilize:

apiVersion: opentelemetry.io/v1alpha1
kind: OpenTelemetryCollector
metadata:
  name: otlc-main
spec:
  mode: deployment
  config: |
    receivers:
      otlp:
        protocols:
          grpc:
    processors:
    exporters:
      debug:
    service:
      pipelines:
        metrics:
          receivers: [otlp]
          processors: []
          exporters: [debug]
  ingress:
    type: ingress
    ruleType: subdomain
    annotations:
      cert-manager.io/cluster-issuer: lets-encrypt # Replace with your issuer
      nginx.ingress.kubernetes.io/backend-protocol: GRPC
    ingressClassName: public
    hostname: 'otlc.example.org'
    tls:
    - hosts:
      - 'otlp-grpc.otlc.example.org'
      secretName: otlc-main-ingress-tls

The ingress will be created as follows:
Obviously, this is a ClusterIP service with pretty much out-of-the-box configuration, so you'd be defining the endpoint as |
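With the subdomain rule type shown above, an external client would point its exporter at the per-receiver host. A sketch reusing the hostname from the example CR (port 443 is an assumption, since TLS terminates at the ingress):

```yaml
exporters:
  otlp:
    # <receiver-name>.<spec.ingress.hostname> from the CR above
    endpoint: otlp-grpc.otlc.example.org:443
    tls:
      insecure: false
```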
Hello, I am trying to expose the collector's service as a NodePort service.

new Manifest(this, 'opentelemetry_collector', {
  manifest: {
    apiVersion: 'opentelemetry.io/v1alpha1',
    kind: 'OpenTelemetryCollector',
    metadata: {
      name: appName,
      namespace: namespace
    },
    spec: {
      ports: [
        {
          appProtocol: "grpc",
          name: "otlp",
          nodePort: grpcReceiverNodePort,
          port: grpcReceiverPort,
          protocol: "TCP",
          targetPort: grpcReceiverPort
        },
        {
          appProtocol: "http",
          name: "otlp-http",
          nodePort: httpReceiverNodePort,
          port: httpReceiverPort,
          protocol: "TCP",
          targetPort: httpReceiverPort
        }
      ],
      config: `
        receivers:
          otlp:
            protocols:
              grpc:
                endpoint: "0.0.0.0:${grpcReceiverPort}"
              http:
                endpoint: "0.0.0.0:${httpReceiverPort}"
        exporters:
          otlp/openobserve:
            endpoint: ${config.grpcEndpoint}
            headers:
              Authorization: Basic ${config.auth}
              organization: ${config.stackName}
              stream-name: default
            tls:
              insecure: true
              insecure_skip_verify: true
        service:
          pipelines:
            traces:
              receivers: [otlp]
              exporters: [otlp/openobserve]
            logs:
              receivers: [otlp]
              exporters: [otlp/openobserve]
      `
    }
  }
});

Looks like version: |
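Until NodePort is supported natively by the operator's service reconciliation, one workaround is a hand-written NodePort Service alongside the CR (a sketch; the selector label, name, and ports below are assumptions and must be checked against the labels the operator puts on your collector pods, e.g. via `kubectl get pods --show-labels`):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: otel-collector-nodeport
spec:
  type: NodePort
  selector:
    # Assumed label; verify against your collector pods.
    app.kubernetes.io/component: opentelemetry-collector
  ports:
  - name: otlp-grpc
    port: 4317
    targetPort: 4317
    nodePort: 30317   # must fall in the cluster's node-port range
```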
@a0s, I have the same problem in … @pavolloffay, I don't think this is fixed yet. The PR you mentioned (#1206) seems to be about OpenShift. |
Use case: a user wants to send telemetry data to the OTEL collector from outside of the cluster - e.g. mobile clients or just a different cluster.