
Supergloo Support #151

Merged 19 commits on Apr 25, 2019

Conversation

@yuval-k (Contributor) commented Apr 17, 2019

Adding supergloo support.

I need to add a few more things here to support the A/B testing case, but I wanted to make sure we are good with the general structure of this PR before I continue investing in it.

One TODO I have there is the reference to the target mesh: since Supergloo supports more than one mesh, its CRDs carry a mesh reference.
I think this can be passed as command-line flags or environment variables.

Note that it seems like a lot of changes, but most of them are vendored / generated code.
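
For illustration only, a minimal sketch of how the mesh reference might be passed via command-line flags with an environment-variable fallback; the flag names, environment variable names, and defaults below are assumptions, not part of this PR:

package main

import (
	"flag"
	"fmt"
	"os"
)

// Hypothetical flags for the Supergloo target mesh reference (namespace/name).
var (
	meshNamespace = flag.String("mesh-namespace", "", "namespace of the target mesh resource")
	meshName      = flag.String("mesh-name", "", "name of the target mesh resource")
)

func main() {
	flag.Parse()

	// Fall back to environment variables when the flags are not set.
	ns, name := *meshNamespace, *meshName
	if ns == "" {
		ns = os.Getenv("MESH_NAMESPACE")
	}
	if name == "" {
		name = os.Getenv("MESH_NAME")
	}

	fmt.Printf("target mesh: %s/%s\n", ns, name)
}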

@yuval-k requested a review from stefanprodan as a code owner on April 17, 2019 17:17
@stefanprodan (Member)

This looks great, thanks @yuval-k!

Flagger has a command flag called mesh provider that you could use to set the target mesh; for now it accepts istio and appmesh as values.

It looks like the e2e tests are failing: Flagger keeps crashing at startup. Can you please rebase your PR?
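
As a rough sketch of that idea (not Flagger's actual code), the mesh provider value could pick the router implementation; the router type names below are placeholders for whatever this PR introduces:

package main

import "fmt"

// Placeholder router types standing in for the real implementations.
type istioRouter struct{}
type appMeshRouter struct{}
type superglooRouter struct{}

// newMeshRouter dispatches on the mesh provider value; "supergloo" is the
// hypothetical new case this PR would add alongside istio and appmesh.
func newMeshRouter(meshProvider string) (interface{}, error) {
	switch meshProvider {
	case "istio":
		return &istioRouter{}, nil
	case "appmesh":
		return &appMeshRouter{}, nil
	case "supergloo":
		return &superglooRouter{}, nil
	default:
		return nil, fmt.Errorf("unsupported mesh provider %q", meshProvider)
	}
}

func main() {
	router, err := newMeshRouter("supergloo")
	fmt.Println(router, err)
}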

@yuval-k (Contributor, Author) commented Apr 17, 2019

Where can you see the reason for the failure? I just see:

Too long with no output (exceeded 10m0s)

@stefanprodan (Member) commented Apr 18, 2019

@yuval-k have you tested this PR inside Kubernetes? Here is the error; it looks like klog is not initialised and tries to write the logs to disk:

W0418 09:17:13.168786       1 client_config.go:549] Neither --kubeconfig nor --master was specified.  Using the inClusterConfig.  This might not work.

log: exiting because of error: log: cannot create log: open /tmp/flagger.flagger-845645c8fc-vtbpt.unknownuser.log.WARNING.20190418-085541.1: read-only file system

For logging, Flagger uses zap together with the Istio logger that replaces glog with zap. Kubernetes 1.13 switched to klog, and logging is broken now. Flagger should log to stderr in JSON format only.

[[override]]
  name = "github.com/golang/glog"
  source = "github.com/istio/glog"

I've tried to override klog with istio/glog but it doesn't work. I would roll back to Kubernetes 1.11 until klog can be replaced with zap.
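
For reference, one way to stop klog from trying to open log files on a read-only filesystem is to force it to stderr. This is only a sketch assuming the k8s.io/klog flag set, not the fix that eventually landed in master:

package main

import (
	"flag"

	"k8s.io/klog"
)

func main() {
	// Register klog's flags on a throwaway flag set and force stderr-only
	// logging so klog never attempts to create files under /tmp.
	fs := flag.NewFlagSet("klog", flag.ExitOnError)
	klog.InitFlags(fs)
	_ = fs.Set("logtostderr", "true")

	klog.Warning("klog output now goes to stderr")
	klog.Flush()
}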

@yuval-k (Contributor, Author) commented Apr 18, 2019

Thanks for all the info. I'm looking into this on our end to see if we can use a lower version of kube.

@stefanprodan (Member)

@yuval-k I've done the upgrade to 1.13 in master and the override of klog with zap.

Please rebase and take my Gopkg files; also pin the supergloo package in Gopkg.toml to a git tag or commit. Thanks!
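
For example, a Gopkg.toml constraint pinning the package could look roughly like this; the import path and the version shown are assumptions, the real values come from the supergloo release being targeted:

[[constraint]]
  name = "github.com/solo-io/supergloo"
  # placeholder: pin to an actual release tag or commit SHA
  version = "v0.x.y"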

@yuval-k (Contributor, Author) commented Apr 18, 2019

Update: ignore this comment.
Hi @stefanprodan, I was able to work around the klog issue by changing main.go like so:


// klogZapAdapter forwards anything klog writes to the zap sugared logger,
// so klog never tries to open log files on disk.
type klogZapAdapter struct {
	logger *zap.SugaredLogger
}

func (k *klogZapAdapter) Write(p []byte) (n int, err error) {
	k.logger.Info(string(p))
	return len(p), nil
}

func main() {
	flag.Parse()

	logger, err := logger.NewLoggerWithEncoding(logLevel, zapEncoding)
	if err != nil {
		log.Fatalf("Error creating logger: %v", err)
	}
	if zapReplaceGlobals {
		zap.ReplaceGlobals(logger.Desugar())
	}

	defer logger.Sync()

	// Redirect klog's output to the zap logger via the adapter above.
	klog.SetOutput(&klogZapAdapter{logger})

Does that sound reasonable to you?

@yuval-k (Contributor, Author) commented Apr 18, 2019

Please disregard my last comment. I just saw your earlier comment saying you solved it.

@yuval-k (Contributor, Author) commented Apr 23, 2019

Hi @stefanprodan, this should be ready for review.

@stefanprodan (Member) commented Apr 23, 2019

Hi @yuval-k, can you please add some docs on how this was tested and how to set up e2e testing for it? I would like to have e2e tests the same as for Istio, and to be able to run them on GKE and on Kubernetes KinD. Thanks

@yuval-k (Contributor, Author) commented Apr 23, 2019

I tested it manually before and have added e2e tests now. For some reason I saw an issue connecting to Prometheus when I ran them locally (which is not in the Supergloo code path), so I pushed them to see if it reproduces in CI.

@yuval-k (Contributor, Author) commented Apr 24, 2019

I see the test failing. I think I'm missing something for the A/B test use case; checking.

@stefanprodan (Member)

I think I know why the tests are failing: the default Istio memory and CPU requests are too high for the CircleCI VM. To make Istio work with KinD you need to set the resource requests from https://github.com/weaveworks/flagger/blob/master/test/e2e-istio-values.yaml

I would add a check to e2e-supergloo.sh:

kubectl -n istio-system rollout status deployment/istio-pilot
kubectl -n istio-system rollout status deployment/istio-mixer
kubectl -n istio-system rollout status deployment/istio-sidecar-injector
kubectl -n istio-system rollout status deployment/istio-telemetry
kubectl -n istio-system rollout status deployment/prometheus

@stefanprodan (Member) left a comment

This looks great. I have one suggestion, please have a look.

Thanks @yuval-k

.circleci/config.yml (review comment, outdated and resolved)
@stefanprodan (Member) left a comment

LGTM

@stefanprodan merged commit 13816ee into fluxcd:master on Apr 25, 2019