
Invalid metadata in OpenStack instance #9086

Closed
zetaab opened this issue May 7, 2020 · 15 comments · Fixed by #9211
Labels
kind/bug: Categorizes issue or PR as related to a bug.
lifecycle/stale: Denotes an issue or PR has remained open with no activity and has become stale.
priority/critical-urgent: Highest priority. Must be actively worked on as someone's top priority right now.

Comments

@zetaab
Member

zetaab commented May 7, 2020

I am trying the 1.18.0-alpha.3 release and creating instances:

Bad request with: [POST https://exxx.13774/v2.1/servers], error message: {"badRequest": {"message": "Invalid input for field/attribute metadata. Value: {u'kopsGroupName': u'master-zone-3-1', u'k8s': u'clusterospr-0502de.k8s.local', u'KopsRole': u'Master', u'k8s.io/cluster-autoscaler/node-template/label/kops.k8s.io/instancegroup': u'master-zone-3-1', u'cluster_generation': u'0', u'k8s.io/role/master': u'1', u'ig_generation': u'0', u'kops.k8s.io/instancegroup': u'master-zone-3-1', u'KubernetesCluster': u'clusterospr-0502de.k8s.local', u'KopsInstanceGroup': u'master-zone-3-1', u'KopsNetwork': u'clusterospr-0502de.k8s.local'}. u'k8s.io/cluster-autoscaler/node-template/label/kops.k8s.io/instancegroup', u'k8s.io/role/master', u'kops.k8s.io/instancegroup' do not match any of the regexes: '^[a-zA-Z0-9-_:. ]{1,255}$'", "code": 400}}
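
For anyone wondering why exactly these keys are rejected: Nova validates metadata keys against the regex quoted in the error above, which does not allow '/'. Below is a minimal standalone Go sketch (illustration only, not kops code) showing which of the kops-generated keys pass that check:

package main

import (
	"fmt"
	"regexp"
)

// novaMetadataKey mirrors the pattern quoted in the 400 response above.
var novaMetadataKey = regexp.MustCompile(`^[a-zA-Z0-9-_:. ]{1,255}$`)

func main() {
	// A few of the keys kops sends as server metadata; the ones containing '/'
	// fail Nova's validation, which is exactly the 400 error shown above.
	keys := []string{
		"KopsRole",
		"KubernetesCluster",
		"k8s.io/role/master",
		"kops.k8s.io/instancegroup",
	}
	for _, k := range keys {
		fmt.Printf("%-45s valid=%v\n", k, novaMetadataKey.MatchString(k))
	}
}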

cc @mitch000001: the PR you made broke this.

This has now been backported to all releases.

Problem PRs:
#8999
#9001
#9013

It is impossible to use any of the latest kops versions with OpenStack.

Versions affected:
v1.18.0-alpha.3
v1.17.0-beta.2
v1.16.2

/kind bug
/priority critical-urgent

@zetaab
Member Author

zetaab commented May 7, 2020

/kind bug
/priority critical-urgent

@hakman
Member

hakman commented May 7, 2020

Really sad to see this...

@zetaab
Member Author

zetaab commented May 7, 2020

I am wondering whether we could have OpenStack e2e tests as well, but we need someone to provide access.

@hakman
Member

hakman commented May 7, 2020

Someone could also manually run such tests before releases until such a cluster exists.
https://kops.sigs.k8s.io/development/testing/#kubernetes-e2e-testing

@mitch000001
Contributor

mitch000001 commented May 7, 2020

I am sorry. Unfortunately it sneaked through, and it is in releases 1.17 and 1.16, so we should create PRs for those releases as well. But @zetaab patched it. Thank you.

@mitch000001
Contributor

@hakman @zetaab What do we need in order to add an OpenStack environment to the kops e2e tests?

@rifelpet
Member

rifelpet commented May 7, 2020

I think it's worth following what cluster-api-provider-openstack did here: kubernetes-sigs/cluster-api-provider-openstack#484

They use a project called OpenLab to run their tests: theopenlab/openlab#141

And it looks like those tests are in the k8s testgrid: https://testgrid.k8s.io/sig-cluster-lifecycle-cluster-api-provider-openstack#capo-conformance-stable-k8s-master

If we could set up something similar, that would be great. I'm not familiar enough with the OpenStack side of things to take the next steps, but I can provide assistance in getting the jobs themselves set up.

@zetaab
Member Author

zetaab commented May 7, 2020

We do actually have an OpenStack project (theopenlab/openlab#181) in some OpenStack installation. However, the problem is that the OpenStack version there is pretty old, so we cannot really test load balancers etc. there.

@rifelpet
Member

rifelpet commented May 7, 2020

Hm, do we have any tests set up there at all? I wonder if "something is better than nothing" at this point. If we can get any sort of periodic testing running there and connected to our testgrid, it could at least serve as some basic smoke testing until they eventually upgrade their OpenStack version and add functionality. Would a test there have caught an issue like this?

@zetaab
Member Author

zetaab commented May 8, 2020

We do not currently have anything there. I am not sure how much the quota there can handle, maybe not much, so I do not know whether it can really be used for PR tests.

@rifelpet
Member

rifelpet commented May 8, 2020

Yeah, perhaps not PR tests, but a single job that runs periodically? We could make sure that even the most basic smoke test is green before releasing new kops versions. It would also help us trace issues back to specific PRs.

@johanssone

Just a comment here for people who are running into this but want to try out 1.18.0-alpha for OpenStack deployments until there is a new release with the fix: the steps below worked for me (Go is required, of course):

# create a throwaway GOPATH and fetch the kops source
mkdir ~/kops
cd ~/kops
export GOPATH=$(pwd)

go get -d k8s.io/kops
cd ${GOPATH}/src/k8s.io/kops/
git checkout master # (commit b80edfdd2526164de9b0cd3e6cffac8fa8c7cca1 as of writing)

# build the kops binary
make

The kops binary can then be found under ${GOPATH}/src/k8s.io/kops/.build/local/kops:

cd ${GOPATH}/src/k8s.io/kops/.build/local
chmod +x kops
./kops version
Version 1.18.0-alpha.3 (git-b80edfdd2)

That binary seems to work fine for deploying clusters on OpenStack. Just keep in mind that it's a build directly from the master development branch.

@hakman
Member

hakman commented May 12, 2020

Even easier, the build including just this fix from the CI tests:
https://storage.googleapis.com/kops-ci/bin/1.18.0-alpha.4+3df14f2be/linux/amd64/kops

@johanssone

Even easier, the build including just this from the CI tests:
https://storage.googleapis.com/kops-ci/bin/1.18.0-alpha.4+3df14f2be/linux/amd64/kops

Or that :D

@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

@k8s-ci-robot added the lifecycle/stale label on Aug 10, 2020.