
Set shards and replicas based on redundancy policy #229

Merged

Conversation

pavolloffay
Member

@jpkrohling
Contributor

This change is Reviewable

@codecov

codecov bot commented Feb 25, 2019

Codecov Report

Merging #229 into master will increase coverage by 0.12%.
The diff coverage is 100%.

Impacted file tree graph

@@            Coverage Diff             @@
##           master     #229      +/-   ##
==========================================
+ Coverage   90.35%   90.48%   +0.12%     
==========================================
  Files          59       59              
  Lines        2645     2680      +35     
==========================================
+ Hits         2390     2425      +35     
  Misses        164      164              
  Partials       91       91
Impacted Files Coverage Δ
pkg/storage/elasticsearch.go 78.43% <100%> (+11.26%) ⬆️

Continue to review full report at Codecov.

Legend
Δ = absolute <relative> (impact), ø = not affected, ? = missing data
Powered by Codecov. Last update 0337cc9...98fa19d.

Contributor

@jpkrohling jpkrohling left a comment

Reviewed 2 of 2 files at r1.
Reviewable status: all files reviewed, 1 unresolved discussion (waiting on @pavolloffay)


pkg/storage/elasticsearch.go, line 55 at r1 (raw file):

			"--es.token-file="+k8sTokenFile,
			"--es.tls.ca="+caPath)
		if !containsPrefix("--es.num-shards", p.Containers[0].Args) {

Could you add a test ensuring that the user's value always has precedence?
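
A minimal sketch of such a test, assuming containsPrefix is implemented the way its call site suggests; injectArgs is a hypothetical stand-in for the function under review (the real name in this PR differs), which appends the default only when the user has not already set the flag:

```go
package storage

import (
	"strings"
	"testing"
)

// containsPrefix reports whether any element of args starts with prefix.
func containsPrefix(prefix string, args []string) bool {
	for _, arg := range args {
		if strings.HasPrefix(arg, prefix) {
			return true
		}
	}
	return false
}

// injectArgs is a hypothetical stand-in for the function under review:
// it appends the default shard count only when the flag is absent.
func injectArgs(args []string) []string {
	if !containsPrefix("--es.num-shards", args) {
		args = append(args, "--es.num-shards=1")
	}
	return args
}

func TestUserNumShardsTakesPrecedence(t *testing.T) {
	args := injectArgs([]string{"--es.num-shards=12"})
	for _, arg := range args {
		if arg == "--es.num-shards=1" {
			t.Errorf("default overrode the user-provided value: %v", args)
		}
	}
}
```

An analogous test would cover --es.num-replicas.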

@pavolloffay
Member Author

I have added more tests and slightly changed the ES CR. If the node count N is higher than 3, it creates a CR with 3 master nodes and N-3 client/data nodes.

In general it's not recommended to deploy more than 3 master nodes, and ESO does not even allow it:

time="2019-02-25T15:55:43Z" level=error msg="error syncing key (myproject/elasticsearch): Failed to reconcile Elasticsearch deployment spec: Invalid master nodes count. Please ensure there are no more than 3 total nodes with master roles"

@pavolloffay
Member Author

At the moment there is a bug in the ES operator: it does not create the deployment correctly for 3 master and N other nodes (https://jira.coreos.com/browse/LOG-340).

@pavolloffay
Member Author

@jpkrohling could you please re-review and/or merge?

Contributor

@jpkrohling jpkrohling left a comment

:lgtm:, just fix the one typo and it's good to be merged. Bonus points if you could add a link to the bug preventing more than 3 masters.

Reviewed 2 of 2 files at r2.
Reviewable status: all files reviewed, 2 unresolved discussions (waiting on @pavolloffay)


pkg/storage/elasticsearch.go, line 133 at r2 (raw file):

func getNodes(es v1alpha1.ElasticsearchSpec) []esv1alpha1.ElasticsearchNode {
	if es.NodeCount <= 3 {

Could you add a reference to the bug? Is there a public place where the bug can be seen? I tried opening the JIRA, but I don't seem to have access to it O_o


pkg/storage/elasticsearch_test.go, line 39 at r2 (raw file):

				RedundancyPolicy: esv1alpha1.FullRedundancy,
				Storage: esv1alpha1.ElasticsearchStorageSpec{
					StorageClassName: "floppydisk",

How do you even know this exists :D


pkg/storage/elasticsearch_test.go, line 99 at r2 (raw file):

	tests := []struct {
		pod      *v1.PodSpec
		extected *v1.PodSpec

s/extected/expected ?

@pavolloffay
Member Author

How do you even know this exists :D

I heard old dudes talking about it... and I thought I'd give it a try on k8s!

Could you add a reference to the bug? Is there a public place where the bug can be seen? I tried opening the JIRA, but I don't seem to have access to it O_o

It's a private JIRA. Once it's fixed we won't have to change anything, as this is already the correct topology; I have just added a comment here to let people know.

@pavolloffay pavolloffay force-pushed the set-shards-and-replicas-in-es branch from c857013 to 98fa19d Compare February 26, 2019 16:27
@pavolloffay pavolloffay merged commit ed9f1b2 into jaegertracing:master Feb 26, 2019