Support resource annotations for jaeger agent sidecar #169

Closed

Conversation

@annanay25 (Member)

Resolves #160

Needs a discussion on what the default values of the sidecar should be, and whether it's okay for these defaults to appear directly in the sidecar package.

Signed-off-by: Annanay [email protected]

@jpkrohling (Contributor)

This change is Reviewable

codecov bot commented Dec 14, 2018

Codecov Report

Merging #169 into master will increase coverage by 0.08%.
The diff coverage is 100%.

@@            Coverage Diff             @@
##           master     #169      +/-   ##
==========================================
+ Coverage   89.88%   89.97%   +0.08%     
==========================================
  Files          64       64              
  Lines        3035     3061      +26     
==========================================
+ Hits         2728     2754      +26     
  Misses        207      207              
  Partials      100      100
Impacted Files Coverage Δ
pkg/inject/sidecar.go 100% <100%> (ø) ⬆️

Δ = absolute <relative> (impact), ø = not affected, ? = missing data
Powered by Codecov. Last update 3b6e569...6d1c76b.

@annanay25 (Member, Author)

TODO: Decide default resource defs for the injected container.

@annanay25 annanay25 changed the title from "WIP: Add resource defs to jaeger agent sidecar" to "Add resource defs to jaeger agent sidecar" on Dec 29, 2018
@sgmiller commented Jan 2, 2019

I would imagine that for most uses, the sidecar would use a fraction of its host pod's resources, with some minimum. I can't comment on RAM since I don't know how the agent buffers data, but CPU could be some fixed ratio, like 1/4 of the largest CPU request in the pod, with a minimum of, say, 50m? Just throwing this out there.

Otherwise, for simplicity, just hardcoding a small request seems best. I'd rather users have to adjust upward.

@jpkrohling (Contributor) left a comment

Reviewed 1 of 2 files at r2, 1 of 1 files at r3.
Reviewable status: all files reviewed, 3 unresolved discussions (waiting on @annanay25)

a discussion (no related file):
Agree with @sgmiller about having a small but sensible default for the CPU and memory, just not sure what would be a good value. For the memory, we may want to start with 64MB, which should be enough for small deployments.



pkg/inject/sidecar.go, line 22 at r3 (raw file):

	Annotation = "inject-jaeger-agent"
	// LimitCPU is the annotation name for cpu resource limits for a jaeger agent sidecar
	LimitCPU = "jaeger-agent-cpu"

jaeger-agent-max-cpu is probably more accurate


pkg/inject/sidecar.go, line 24 at r3 (raw file):

	LimitCPU = "jaeger-agent-cpu"
	// LimitMem is the annotation name for memory resource limits for a jaeger agent sidecar
	LimitMem = "jaeger-agent-mem"

jaeger-agent-max-mem (or even jaeger-agent-max-memory)

@annanay25 (Member, Author)

Just need to punch in the numbers for default values now.

@jpkrohling (Contributor) left a comment

Reviewed 1 of 1 files at r4.
Reviewable status: all files reviewed, 4 unresolved discussions (waiting on @annanay25)

a discussion (no related file):
Looks like the tests are failing. Other than that, we just need to agree on the default values. Suggestions?


@annanay25 annanay25 force-pushed the resource-defs-for-sidecar branch from da6bb67 to 24f9fc8 on January 16, 2019, 18:52
@annanay25 (Member, Author) left a comment

Reviewable status: 1 of 2 files reviewed, 4 unresolved discussions (waiting on @jpkrohling and @annanay25)

a discussion (no related file):

Previously, jpkrohling (Juraci Paixão Kröhling) wrote…

Looks like the tests are failing. Other than that, we just need to agree on the default values. Suggestions?

Fixed tests. For a default, is it reasonable to assume the requirements of a t3.micro (2 CPUs, 1 GB Memory) instance?


@jpkrohling (Contributor) left a comment

Reviewable status: 1 of 2 files reviewed, 4 unresolved discussions (waiting on @annanay25 and @jpkrohling)

a discussion (no related file):

Previously, annanay25 (Annanay Agarwal) wrote…

Fixed tests. For a default, is it reasonable to assume the requirements of a t3.micro (2 CPUs, 1 GB Memory) instance?

I believe that pods are usually smaller, and our agent container shouldn't consume more than, say, 50% of the pod's resources. So, I would suggest that the limits be 0.5 for the CPU and 128Mi for memory. If we get community feedback that this is too conservative, we can bump it up.
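A minimal Go sketch of what those suggested defaults could look like, using resource.MustParse from k8s.io/apimachinery (the variable name is hypothetical, not the PR's actual code):

    package inject

    import (
        v1 "k8s.io/api/core/v1"
        "k8s.io/apimachinery/pkg/api/resource"
    )

    // Hypothetical defaults following the suggestion above: half a CPU and 128Mi of memory.
    var defaultSidecarLimits = v1.ResourceList{
        v1.ResourceCPU:    resource.MustParse("500m"),  // 0.5 CPU
        v1.ResourceMemory: resource.MustParse("128Mi"), // 128 mebibytes
    }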


@annanay25 (Member, Author)

Updated. @jpkrohling

@jpkrohling (Contributor) left a comment

Reviewed 2 of 2 files at r5.
Reviewable status: all files reviewed, 5 unresolved discussions (waiting on @annanay25)


pkg/inject/sidecar.go, line 89 at r5 (raw file):

	// Checking annotations for CPU/Memory limits
	limitCPU := "500"

Could you please confirm that 500 is the right number? I had the impression that 0.5 would be the appropriate value.

@annanay25 (Member, Author) left a comment

Reviewable status: all files reviewed, 5 unresolved discussions (waiting on @annanay25 and @jpkrohling)


pkg/inject/sidecar.go, line 89 at r5 (raw file):

Previously, jpkrohling (Juraci Paixão Kröhling) wrote…

Could you please confirm that 500 is the right number? I had the impression that 0.5 would be the appropriate value.

Right, I think I should use 500 with NewMilliQuantity defined here - https://github.com/jaegertracing/jaeger-operator/blob/master/vendor/k8s.io/apimachinery/pkg/api/resource/quantity.go#L676

(NewQuantity() takes an int64 argument, so can't pass fractions).
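For illustration, a small sketch (an assumption, not the diff itself) of how NewMilliQuantity from k8s.io/apimachinery/pkg/api/resource expresses 0.5 CPU, compared with NewQuantity:

    package main

    import (
        "fmt"

        "k8s.io/apimachinery/pkg/api/resource"
    )

    func main() {
        // NewMilliQuantity interprets the value in thousandths, so 500 means 500m (0.5 CPU).
        cpuLimit := resource.NewMilliQuantity(500, resource.DecimalSI)
        fmt.Println(cpuLimit.String()) // prints "500m"

        // NewQuantity treats the same 500 as 500 whole CPUs; it only takes an
        // int64, so 0.5 cannot be passed directly.
        fmt.Println(resource.NewQuantity(500, resource.DecimalSI).String()) // prints "500"
    }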

@annanay25 (Member, Author) left a comment

Reviewable status: all files reviewed, 5 unresolved discussions (waiting on @annanay25 and @jpkrohling)


pkg/inject/sidecar.go, line 89 at r5 (raw file):

Previously, annanay25 (Annanay Agarwal) wrote…

Right, I think I should use 500 with NewMilliQuantity defined here - https://github.com/jaegertracing/jaeger-operator/blob/master/vendor/k8s.io/apimachinery/pkg/api/resource/quantity.go#L676

(NewQuantity() takes an int64 argument, so can't pass fractions).

Also, I wasn't convinced about the memory limits either, so I went ahead and checked and found this - https://github.com/jaegertracing/jaeger-operator/blob/master/vendor/k8s.io/api/core/v1/types.go#L4702

So for 128MiB, we would actually have to put 128 * 1024 * 1024 ?

@jpkrohling (Contributor)

So for 128MiB, we would actually have to put 128 * 1024 * 1024 ?

Apparently, you could use resource.NewScaledQuantity(128, resource.Mega)
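A quick sketch comparing the two forms discussed above (my reading of the suggestion, not the PR's code); note that resource.Mega is a decimal scale, so it yields 128M rather than 128Mi:

    package main

    import (
        "fmt"

        "k8s.io/apimachinery/pkg/api/resource"
    )

    func main() {
        // Spelling out the bytes gives binary mebibytes: 128 * 1024 * 1024 = 128Mi.
        memBinary := resource.NewQuantity(128*1024*1024, resource.BinarySI)
        fmt.Println(memBinary.String()) // "128Mi"

        // NewScaledQuantity(128, resource.Mega) is 128 * 10^6 bytes, i.e. decimal 128M.
        memDecimal := resource.NewScaledQuantity(128, resource.Mega)
        fmt.Println(memDecimal.String()) // "128M"
    }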

@annanay25 (Member, Author) left a comment

Reviewable status: 0 of 3 files reviewed, 6 unresolved discussions (waiting on @annanay25 and @jpkrohling)


pkg/inject/sidecar.go, line 127 at r5 (raw file):

				v1.ResourceLimitsCPU:    *resource.NewQuantity(int64(CPULimit), resource.BinarySI),
				v1.ResourceLimitsMemory: *resource.NewQuantity(int64(MemLimit), resource.DecimalSI),
			},

Removed Requests section.

@jpkrohling (Contributor) left a comment

:lgtm:, just had a minor comment.

Reviewed 3 of 3 files at r6.
Reviewable status: all files reviewed, 5 unresolved discussions (waiting on @annanay25)


README.adoc, line 330 at r6 (raw file):

    inject-jaeger-agent: "true"
    jaeger-agent-max-cpu: "1000" # in milli-CPU units. 1000 = 1 CPU.
    jaeger-agent-max-memory: "256" # in MiB units. 

Reading this made me realize that it would make sense to support whatever values the Quantity accepts, like cpu: 500m. You can probably use the ParseQuantity function for that:

https://github.com/kubernetes/apimachinery/blob/master/pkg/api/resource/quantity.go#L263
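A hedged sketch of how the annotation values could go through ParseQuantity, so that any form the Quantity accepts ("500m", "0.5", "128Mi", "1Gi", ...) works, with parse errors logged rather than silently ignored (the helper name and the use of logrus here are assumptions, not the PR's final code):

    package inject

    import (
        "github.com/sirupsen/logrus"
        "k8s.io/apimachinery/pkg/api/resource"
    )

    // parseQuantityAnnotation is a hypothetical helper: it accepts anything that
    // resource.Quantity understands and falls back to the given default when the
    // annotation is missing or malformed.
    func parseQuantityAnnotation(annotations map[string]string, key string, def resource.Quantity) resource.Quantity {
        raw, ok := annotations[key]
        if !ok {
            return def
        }
        q, err := resource.ParseQuantity(raw)
        if err != nil {
            logrus.WithError(err).WithField("annotation", key).Warn("invalid resource quantity, falling back to default")
            return def
        }
        return q
    }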


pkg/inject/sidecar.go, line 98 at r6 (raw file):

	}

	CPULimit, _ := strconv.Atoi(limitCPU)

Shouldn't we at least log the error somewhere? Same for the next line.

(might be irrelevant if you do decide to use the ParseQuantity function)

@annanay25 (Member, Author) left a comment

Reviewable status: 1 of 3 files reviewed, 5 unresolved discussions (waiting on @annanay25 and @jpkrohling)


README.adoc, line 330 at r6 (raw file):

Previously, jpkrohling (Juraci Paixão Kröhling) wrote…

Reading this made me realize that it would make sense to support whatever values the Quantity accepts, like cpu: 500m. You can probably use the ParseQuantity function for that:

https://github.com/kubernetes/apimachinery/blob/master/pkg/api/resource/quantity.go#L263

Absolutely. Exactly what we need :)

Addressed comments.

@jpkrohling (Contributor) left a comment

Reviewed 3 of 3 files at r7.
Dismissed @annanay25 from a discussion.
Reviewable status: all files reviewed, 1 unresolved discussion (waiting on @annanay25)

a discussion (no related file):
Looks good to me, but the tests are failing:

$ make test
Running unit tests...
?   	github.com/jaegertracing/jaeger-operator/cmd	[no test files]
?   	github.com/jaegertracing/jaeger-operator/cmd/manager	[no test files]
ok  	github.com/jaegertracing/jaeger-operator/pkg/account	0.721s	coverage: 100.0% of statements
?   	github.com/jaegertracing/jaeger-operator/pkg/apis	[no test files]
ok  	github.com/jaegertracing/jaeger-operator/pkg/apis/io/v1alpha1	0.758s	coverage: 17.0% of statements
?   	github.com/jaegertracing/jaeger-operator/pkg/cmd/start	[no test files]
?   	github.com/jaegertracing/jaeger-operator/pkg/cmd/version	[no test files]
ok  	github.com/jaegertracing/jaeger-operator/pkg/config/sampling	0.754s	coverage: 94.1% of statements
ok  	github.com/jaegertracing/jaeger-operator/pkg/config/ui	0.755s	coverage: 94.1% of statements
?   	github.com/jaegertracing/jaeger-operator/pkg/controller	[no test files]
?   	github.com/jaegertracing/jaeger-operator/pkg/controller/deployment	[no test files]
ok  	github.com/jaegertracing/jaeger-operator/pkg/controller/jaeger	3.035s	coverage: 64.3% of statements
ok  	github.com/jaegertracing/jaeger-operator/pkg/cronjob	0.021s	coverage: 93.5% of statements
ok  	github.com/jaegertracing/jaeger-operator/pkg/deployment	0.031s	coverage: 100.0% of statements
ok  	github.com/jaegertracing/jaeger-operator/pkg/ingress	0.029s	coverage: 100.0% of statements
ok  	github.com/jaegertracing/jaeger-operator/pkg/inject	0.063s	coverage: 100.0% of statements
ok  	github.com/jaegertracing/jaeger-operator/pkg/route	0.022s	coverage: 100.0% of statements
ok  	github.com/jaegertracing/jaeger-operator/pkg/service	0.051s	coverage: 100.0% of statements
ok  	github.com/jaegertracing/jaeger-operator/pkg/storage	0.007s	coverage: 100.0% of statements
ok  	github.com/jaegertracing/jaeger-operator/pkg/strategy	0.008s	coverage: 100.0% of statements
ok  	github.com/jaegertracing/jaeger-operator/pkg/util	0.006s	coverage: 100.0% of statements
?   	github.com/jaegertracing/jaeger-operator/pkg/version	[no test files]
customresourcedefinition.apiextensions.k8s.io/jaegers.io.jaegertracing created
Formatting code...
Building...
Sending build context to Docker daemon  121.2MB
Step 1/4 : FROM alpine:3.8
 ---> 196d12cf6ab1
Step 2/4 : USER nobody
 ---> Using cache
 ---> 18305c5b670d
Step 3/4 : ADD build/_output/bin/jaeger-operator /usr/local/bin/jaeger-operator
 ---> 854e4212f23a
Step 4/4 : ENTRYPOINT ["/usr/local/bin/jaeger-operator"]
 ---> Running in 5ec7373e38ab
Removing intermediate container 5ec7373e38ab
 ---> 18be60554bb0
Successfully built 18be60554bb0
Successfully tagged jpkroehling/jaeger-operator:latest
Pushing image jpkroehling/jaeger-operator:latest...
Running end-to-end tests...
time="2019-01-28T11:12:30+01:00" level=info msg="passing &{{Jaeger io.jaegertracing/v1alpha1} {agent-as-daemonset  jaeger-jaeger-group-daemonset-1548670340    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] nil [] } {allInOne { {map[]} {[] [] map[] {map[] map[]}}} {0  {map[]} {[] [] map[] {map[] map[]}}} {0  {map[]} {[] [] map[] {map[] map[]}}} {DaemonSet  {map[log-level:debug]} {[] [] map[] {map[] map[]}}} {{[]}} {{[]}} {  {map[]} {<nil>   } {<nil>     false  false false false} {<nil> 0  }} {<nil>  {[] [] map[] {map[] map[]}}} {[] [] map[] {map[] map[]}}} {}}"
time="2019-01-28T11:15:06+01:00" level=info msg="passing &{{Jaeger io.jaegertracing/v1alpha1} {with-cassandra  jaeger-jaeger-group-cassandra-1548670501    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] nil [] } {allInOne { {map[]} {[] [] map[] {map[] map[]}}} {0  {map[]} {[] [] map[] {map[] map[]}}} {0  {map[]} {[] [] map[] {map[] map[]}}} {  {map[]} {[] [] map[] {map[] map[]}}} {{[]}} {{[]}} {cassandra  {map[cassandra.servers:cassandra.default.svc cassandra.keyspace:jaeger_v1_datacenter1]} {<nil>  datacenter1 } {<nil>     false  false false false} {<nil> 0  }} {<nil>  {[] [] map[] {map[] map[]}}} {[] [] map[] {map[] map[]}}} {}}"
Forwarding from 127.0.0.1:16686 -> 16686
Forwarding from [::1]:16686 -> 16686
Forwarding from 127.0.0.1:14268 -> 14268
Forwarding from [::1]:14268 -> 14268

E0128 11:16:01.875670   14597 portforward.go:339] error closing listener: close tcp4 127.0.0.1:16686: use of closed network connection
E0128 11:16:01.876319   14597 portforward.go:339] error closing listener: close tcp6 [::1]:16686: use of closed network connection
E0128 11:16:01.877988   14597 portforward.go:339] error closing listener: close tcp4 127.0.0.1:14268: use of closed network connection
E0128 11:16:01.879056   14597 portforward.go:339] error closing listener: close tcp6 [::1]:14268: use of closed network connection
time="2019-01-28T11:18:44+01:00" level=info msg="passing &{{Jaeger io.jaegertracing/v1alpha1} {my-jaeger  jaeger-jaeger-group-my-jaeger-1548670713    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] nil [] } {allInOne { {map[log-level:debug memory.max-traces:10000]} {[] [] map[] {map[] map[]}}} {0  {map[]} {[] [] map[] {map[] map[]}}} {0  {map[]} {[] [] map[] {map[] map[]}}} {  {map[]} {[] [] map[] {map[] map[]}}} {{[]}} {{[]}} {  {map[]} {<nil>   } {<nil>     false  false false false} {<nil> 0  }} {<nil>  {[] [] map[] {map[] map[]}}} {[] [] map[] {map[] map[]}}} {}}"
time="2019-01-28T11:18:44+01:00" level=info msg="passing &{{Jaeger io.jaegertracing/v1alpha1} {my-jaeger  jaeger-jaeger-group-my-other-jaeger-1548670713    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] nil [] } {allInOne { {map[log-level:debug memory.max-traces:10000]} {[] [] map[] {map[] map[]}}} {0  {map[]} {[] [] map[] {map[] map[]}}} {0  {map[]} {[] [] map[] {map[] map[]}}} {  {map[]} {[] [] map[] {map[] map[]}}} {{[]}} {{[]}} {  {map[]} {<nil>   } {<nil>     false  false false false} {<nil> 0  }} {<nil>  {[] [] map[] {map[] map[]}}} {[] [] map[] {map[] map[]}}} {}}"
Forwarding from 127.0.0.1:16686 -> 16686
Forwarding from [::1]:16686 -> 16686

Forwarding from 127.0.0.1:14268 -> 14268
Forwarding from [::1]:14268 -> 14268

E0128 11:19:14.510695   14597 portforward.go:339] error closing listener: close tcp4 127.0.0.1:14268: use of closed network connection
E0128 11:19:14.510718   14597 portforward.go:339] error closing listener: close tcp4 127.0.0.1:16686: use of closed network connection
E0128 11:19:14.510721   14597 portforward.go:339] error closing listener: close tcp6 [::1]:14268: use of closed network connection
E0128 11:19:14.511818   14597 portforward.go:339] error closing listener: close tcp6 [::1]:16686: use of closed network connection
--- FAIL: TestJaeger (414.32s)
    --- FAIL: TestJaeger/jaeger-group (373.22s)
        --- FAIL: TestJaeger/jaeger-group/sidecar (100.38s)
            client.go:57: resource type RoleBinding with namespace/name (jaeger-jaeger-group-sidecar-1548670401/jaeger-operator) created
            client.go:57: resource type Role with namespace/name (jaeger-jaeger-group-sidecar-1548670401/jaeger-operator) created
            client.go:57: resource type ServiceAccount with namespace/name (jaeger-jaeger-group-sidecar-1548670401/jaeger-operator) created
            client.go:57: resource type Deployment with namespace/name (jaeger-jaeger-group-sidecar-1548670401/jaeger-operator) created
            jaeger_test.go:60: Initialized cluster resources. Namespace: jaeger-jaeger-group-sidecar-1548670401
            wait_util.go:45: Waiting for full availability of jaeger-operator deployment (0/1)
            wait_util.go:51: Deployment available (1/1)
            client.go:57: resource type Jaeger with namespace/name (jaeger-jaeger-group-sidecar-1548670401/agent-as-sidecar) created
            client.go:57: resource type Deployment with namespace/name (jaeger-jaeger-group-sidecar-1548670401/vertx-create-span-sidecar) created
            wait_util.go:45: Waiting for full availability of vertx-create-span-sidecar deployment (0/1)
            wait_util.go:51: Deployment available (1/1)
            wait_util.go:81: Waiting for full availability of the ingress agent-as-sidecar-query
            wait_util.go:81: Waiting for full availability of the ingress agent-as-sidecar-query
            wait_util.go:81: Waiting for full availability of the ingress agent-as-sidecar-query
            wait_util.go:87: Ingress available
            sidecar.go:30: timed out waiting for the condition
            client.go:75: resource type Deployment with namespace/name (jaeger-jaeger-group-sidecar-1548670401/vertx-create-span-sidecar) successfully deleted
            client.go:75: resource type Jaeger with namespace/name (jaeger-jaeger-group-sidecar-1548670401/agent-as-sidecar) successfully deleted
            client.go:75: resource type Deployment with namespace/name (jaeger-jaeger-group-sidecar-1548670401/jaeger-operator) successfully deleted
            client.go:75: resource type ServiceAccount with namespace/name (jaeger-jaeger-group-sidecar-1548670401/jaeger-operator) successfully deleted
            client.go:75: resource type Role with namespace/name (jaeger-jaeger-group-sidecar-1548670401/jaeger-operator) successfully deleted
            client.go:75: resource type RoleBinding with namespace/name (jaeger-jaeger-group-sidecar-1548670401/jaeger-operator) successfully deleted
        --- FAIL: TestJaeger/jaeger-group/my-other-jaeger (31.36s)
            client.go:57: resource type RoleBinding with namespace/name (jaeger-jaeger-group-my-other-jaeger-1548670713/jaeger-operator) created
            client.go:57: resource type Role with namespace/name (jaeger-jaeger-group-my-other-jaeger-1548670713/jaeger-operator) created
            client.go:57: resource type ServiceAccount with namespace/name (jaeger-jaeger-group-my-other-jaeger-1548670713/jaeger-operator) created
            client.go:57: resource type Deployment with namespace/name (jaeger-jaeger-group-my-other-jaeger-1548670713/jaeger-operator) created
            jaeger_test.go:60: Initialized cluster resources. Namespace: jaeger-jaeger-group-my-other-jaeger-1548670713
            wait_util.go:45: Waiting for full availability of jaeger-operator deployment (0/1)
            wait_util.go:51: Deployment available (1/1)
            client.go:57: resource type Jaeger with namespace/name (jaeger-jaeger-group-my-other-jaeger-1548670713/my-jaeger) created
            wait_util.go:45: Waiting for full availability of my-jaeger deployment (0/1)
            wait_util.go:51: Deployment available (1/1)
            client.go:57: resource type Jaeger with namespace/name (jaeger-jaeger-group-my-other-jaeger-1548670713/all-in-one-with-ui-config) created
            wait_util.go:87: Ingress available
            all_in_one_test.go:33: unexpected status code 503
            client.go:75: resource type Jaeger with namespace/name (jaeger-jaeger-group-my-other-jaeger-1548670713/all-in-one-with-ui-config) successfully deleted
            client.go:75: resource type Jaeger with namespace/name (jaeger-jaeger-group-my-other-jaeger-1548670713/my-jaeger) successfully deleted
            client.go:75: resource type Deployment with namespace/name (jaeger-jaeger-group-my-other-jaeger-1548670713/jaeger-operator) successfully deleted
            client.go:75: resource type ServiceAccount with namespace/name (jaeger-jaeger-group-my-other-jaeger-1548670713/jaeger-operator) successfully deleted
            client.go:75: resource type Role with namespace/name (jaeger-jaeger-group-my-other-jaeger-1548670713/jaeger-operator) successfully deleted
            client.go:75: resource type RoleBinding with namespace/name (jaeger-jaeger-group-my-other-jaeger-1548670713/jaeger-operator) successfully deleted
        --- FAIL: TestJaeger/jaeger-group/my-jaeger (31.39s)
            client.go:57: resource type RoleBinding with namespace/name (jaeger-jaeger-group-my-jaeger-1548670713/jaeger-operator) created
            client.go:57: resource type Role with namespace/name (jaeger-jaeger-group-my-jaeger-1548670713/jaeger-operator) created
            client.go:57: resource type ServiceAccount with namespace/name (jaeger-jaeger-group-my-jaeger-1548670713/jaeger-operator) created
            client.go:57: resource type Deployment with namespace/name (jaeger-jaeger-group-my-jaeger-1548670713/jaeger-operator) created
            jaeger_test.go:60: Initialized cluster resources. Namespace: jaeger-jaeger-group-my-jaeger-1548670713
            wait_util.go:45: Waiting for full availability of jaeger-operator deployment (0/1)
            wait_util.go:51: Deployment available (1/1)
            client.go:57: resource type Jaeger with namespace/name (jaeger-jaeger-group-my-jaeger-1548670713/my-jaeger) created
            wait_util.go:45: Waiting for full availability of my-jaeger deployment (0/1)
            wait_util.go:51: Deployment available (1/1)
            client.go:57: resource type Jaeger with namespace/name (jaeger-jaeger-group-my-jaeger-1548670713/all-in-one-with-ui-config) created
            wait_util.go:87: Ingress available
            all_in_one_test.go:33: unexpected status code 503
            client.go:75: resource type Jaeger with namespace/name (jaeger-jaeger-group-my-jaeger-1548670713/all-in-one-with-ui-config) successfully deleted
            client.go:75: resource type Jaeger with namespace/name (jaeger-jaeger-group-my-jaeger-1548670713/my-jaeger) successfully deleted
            client.go:75: resource type Deployment with namespace/name (jaeger-jaeger-group-my-jaeger-1548670713/jaeger-operator) successfully deleted
            client.go:75: resource type ServiceAccount with namespace/name (jaeger-jaeger-group-my-jaeger-1548670713/jaeger-operator) successfully deleted
            client.go:75: resource type Role with namespace/name (jaeger-jaeger-group-my-jaeger-1548670713/jaeger-operator) successfully deleted
            client.go:75: resource type RoleBinding with namespace/name (jaeger-jaeger-group-my-jaeger-1548670713/jaeger-operator) successfully deleted
FAIL
FAIL	github.com/jaegertracing/jaeger-operator/test/e2e	414.515s
make: *** [Makefile:75: e2e-tests] Error 1

Once they pass, this can be merged.


@jpkrohling (Contributor)

@annanay25, are you still working on this one?

@annanay25 (Member, Author)

I'm facing minikube setup issues; could you tell me the current test failure output? I'll work backwards from there.

@jpkrohling (Contributor)

It's currently failing on the sidecar test, timing out because the last condition never becomes true (len(resp.Data) > 0).

@jpkrohling (Contributor)

@annanay25 Any news on this one?

Annanay added 8 commits March 23, 2019 23:51
Signed-off-by: Annanay <[email protected]>
Signed-off-by: Annanay <[email protected]>
Signed-off-by: Annanay <[email protected]>
Signed-off-by: Annanay <[email protected]>
@annanay25 annanay25 force-pushed the resource-defs-for-sidecar branch from aa2da10 to e2c066f on March 23, 2019, 18:26
Annanay added 3 commits March 23, 2019 23:59
Signed-off-by: Annanay <[email protected]>
Signed-off-by: Annanay <[email protected]>
@jpkrohling (Contributor)

Debugging this test further revealed this from the operator's logs:

time="2019-03-25T14:09:08Z" level=info msg="Injecting Jaeger Agent sidecar" deployment=vertx-create-span-sidecar jaeger=agent-as-sidecar jaeger-namespace=smoke-smoke-sidecar-1553522890 namespace=smoke-smoke-sidecar-1553522890
time="2019-03-25T14:09:08Z" level=error msg="failed to update" deployment="&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:vertx-create-span-sidecar,GenerateName:,Namespace:smoke-smoke-sidecar-1553522890,SelfLink:/apis/apps/v1/namespaces/smoke-smoke-sidecar-1553522890/deployments/vertx-create-span-sidecar,UID:71ccb8b2-4f07-11e9-9b3a-2cbdecb746c7,ResourceVersion:9908,Generation:1,CreationTimestamp:2019-03-25 14:08:20 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{},Annotations:map[string]string{deployment.kubernetes.io/revision: 1,sidecar.jaegertracing.io/inject: agent-as-sidecar,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{app: vertx-create-span-sidecar,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{app: vertx-create-span-sidecar,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{vertx-create-span-sidecar jaegertracing/vertx-create-span:operator-e2e-tests [] []  [{ 0 8080 TCP }] [] [{JAEGER_SERVICE_NAME vertx-create-span-sidecar.smoke-smoke-sidecar-1553522890 nil} {JAEGER_PROPAGATION jaeger,b3 nil}] {map[] map[]} [] [] Probe{Handler:Handler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/,Port:8080,Host:,Scheme:HTTP,HTTPHeaders:[],},TCPSocket:nil,},InitialDelaySeconds:1,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,} &Probe{Handler:Handler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/,Port:8080,Host:,Scheme:HTTP,HTTPHeaders:[],},TCPSocket:nil,},InitialDelaySeconds:1,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,} nil /dev/termination-log File IfNotPresent nil false false false} {jaeger-agent jaegertracing/jaeger-agent:1.11 [] [--log-level=debug --reporter.grpc.host-port=dns:///agent-as-sidecar-collector-headless.smoke-smoke-sidecar-1553522890:14250 --reporter.type=grpc]  [{zk-compact-trft 0 5775  } {config-rest 0 5778  } {jg-compact-trft 0 6831  } {jg-binary-trft 0 6832  }] [] [] {map[limits.cpu:{{500 -3} {<nil>} 500m DecimalSI} limits.memory:{{134217728 0} {<nil>}  BinarySI}] map[requests.cpu:{{500 -3} {<nil>} 500m DecimalSI} requests.memory:{{134217728 0} {<nil>}  BinarySI}]} [] [] nil nil nil    nil false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%,MaxSurge:25%,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[{Available True 2019-03-25 14:08:24 +0000 UTC 2019-03-25 14:08:24 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} {Progressing True 2019-03-25 14:08:24 +0000 UTC 2019-03-25 14:08:20 +0000 UTC NewReplicaSetAvailable ReplicaSet \"vertx-create-span-sidecar-7d7b9f9b79\" has successfully progressed.}],ReadyReplicas:1,CollisionCount:nil,},}" error="Deployment.apps \"vertx-create-span-sidecar\" is invalid: [spec.template.spec.containers[1].resources.limits[limits.cpu]: Invalid value: \"limits.cpu\": must be a standard resource for containers, spec.template.spec.containers[1].resources.limits[limits.memory]: Invalid value: \"limits.memory\": must be a standard resource for containers, spec.template.spec.containers[1].resources.requests[requests.cpu]: Invalid value: \"requests.cpu\": must be a standard resource for containers, spec.template.spec.containers[1].resources.requests[requests.memory]: Invalid value: \"requests.memory\": must be a standard resource for containers]"

This seems interesting:

Invalid value: \"limits.cpu\": must be a standard resource for containers, spec.template.spec.containers[1].resources.limits[limits.memory]: Invalid value: \"limits.memory\": must be a standard resource for containers, spec.template.spec.containers[1].resources.requests[requests.cpu]: Invalid value: \"requests.cpu\": must be a standard resource for containers, spec.template.spec.containers[1].resources.requests[requests.memory]: Invalid value: \"requests.memory\": must be a standard resource for containers]"
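For context: the constants v1.ResourceLimitsCPU and v1.ResourceLimitsMemory expand to "limits.cpu" and "limits.memory", which are ResourceQuota resource names, while container resource lists must use the plain "cpu" and "memory" keys. A sketch of what the container-level keys would look like (an illustration of the error message, not the PR's final fix):

    package inject

    import (
        v1 "k8s.io/api/core/v1"
        "k8s.io/apimachinery/pkg/api/resource"
    )

    // sidecarResources illustrates the standard container resource names; the values
    // just reuse the defaults discussed earlier in this thread.
    var sidecarResources = v1.ResourceRequirements{
        Limits: v1.ResourceList{
            // v1.ResourceCPU ("cpu") and v1.ResourceMemory ("memory") are valid for
            // containers; v1.ResourceLimitsCPU ("limits.cpu") is only valid in a
            // ResourceQuota, which is what the validation error above complains about.
            v1.ResourceCPU:    *resource.NewMilliQuantity(500, resource.DecimalSI),
            v1.ResourceMemory: *resource.NewScaledQuantity(128, resource.Mega),
        },
    }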

@pavolloffay (Member)

@annanay25 any news on this?

@annanay25 (Member, Author)

Unfortunately, I'm unable to get a working minikube setup to debug this further, so I will be closing this PR unless someone else is able to pick it up.

@jpkrohling (Contributor)

@annanay25, @objectiser, should we close this one, as something similar has been done with #401?

@objectiser (Contributor)

@jpkrohling Yes, I think so.

@jpkrohling (Contributor)

@annanay25, I'm closing this one, but feel free to reopen if you want to keep working on it.

@jpkrohling jpkrohling closed this Jul 2, 2019
@annanay25 (Member, Author)

Sure, thanks @jpkrohling!

Will pick this up as soon as I get time.

@annanay25 annanay25 changed the title from "Add resource defs to jaeger agent sidecar" to "Support resource annotations for jaeger agent sidecar" on Jul 21, 2019
@annanay25 (Member, Author)

@jpkrohling I have cleaned up my branch for this feature, but I don't seem to have permission to reopen this issue. (I think I should open a clean PR with the changes anyway; do let me know what you'd prefer.)


Successfully merging this pull request may close these issues.

Cannot use HorizontalPodAutoscaling: injected agent lacks resource defs