We use git submodules
instead of the common practice of vendoring because of one big advantage:
we can use git merge
to update the code base when supporting a new version of Kubernetes.
We try to only add code rather than replace it, so you can always check
what the last merged version was and how we connect to Cloudify.
So theoretically you can build the Kubernetes binaries from this repository, but we give no
guarantees for such usage. Once we are able to attach our code to the Kubernetes product as a plugin,
we will drop all the Kubernetes forks and use only the official repositories
(around 1.9+?).
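For example, updating the Kubernetes fork to a new upstream release could look roughly like this (a sketch only; the upstream remote name and the release tag are assumptions, adjust them to the actual fork setup):
cd src/k8s.io/kubernetes
# assumption: "upstream" points at the official kubernetes/kubernetes repository
git remote add upstream https://github.com/kubernetes/kubernetes.git
git fetch upstream --tags
# merge the new release tag on top of our additions; the tag is only an example
git merge v1.9.0
cd -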
git clone --recursive [email protected]:cloudify-incubator/cloudify-kubernetes-provider.git
# include a submodule summary in git status output
git config status.submodulesummary 1
sudo apt-get install golang-go
export GOBIN=`pwd`/bin
export PATH=$PATH:`pwd`/bin
export GOPATH=`pwd`
git submodule update
make all
make reformat
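Since GOBIN is set to ./bin, the built binaries end up there; a quick check that the build produced them:
ls bin/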
After updating to a new version of Kubernetes, run:
rm -rfv src/k8s.io/kubernetes/vendor/github.com/golang/glog
rm -rfv src/k8s.io/kubernetes/vendor/github.com/google/gofuzz
rm -rfv src/k8s.io/kubernetes/vendor/github.com/davecgh/go-spew
rm -rfv src/k8s.io/kubernetes/vendor/github.com/json-iterator/go
rm -rfv src/k8s.io/kubernetes/vendor/github.com/pborman/uuid
rm -rfv src/k8s.io/kubernetes/vendor/github.com/docker/spdystream
rm -rfv src/k8s.io/kubernetes/vendor/github.com/golang/protobuf
After updating to a new version of the autoscaler, run:
rm -rfv src/k8s.io/autoscaler/cluster-autoscaler/vendor/github.com/golang/glog
rm -rfv src/k8s.io/autoscaler/cluster-autoscaler/vendor/github.com/google/gofuzz
rm -rfv src/k8s.io/autoscaler/cluster-autoscaler/vendor/github.com/davecgh/go-spew
rm -rfv src/k8s.io/autoscaler/cluster-autoscaler/vendor/github.com/json-iterator/go
rm -rfv src/k8s.io/autoscaler/cluster-autoscaler/vendor/github.com/pborman/uuid
rm -rfv src/k8s.io/autoscaler/cluster-autoscaler/vendor/github.com/docker/spdystream
rm -rfv src/k8s.io/autoscaler/cluster-autoscaler/vendor/github.com/golang/protobuf
and clean up Godeps/Godeps.json.
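A minimal sketch for that cleanup, assuming jq is available and the usual Godeps layout (a Deps array of ImportPath entries); repeat it for each removed package and for both vendored trees:
# drop the glog entries from the autoscaler Godeps file (example package and path)
jq '.Deps |= map(select(.ImportPath | startswith("github.com/golang/glog") | not))' \
  src/k8s.io/autoscaler/cluster-autoscaler/Godeps/Godeps.json > /tmp/Godeps.json \
  && mv /tmp/Godeps.json src/k8s.io/autoscaler/cluster-autoscaler/Godeps/Godeps.json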
# cfy-kubernetes version
cfy-kubernetes -version
cfy-kubernetes --kubeconfig $HOME/.kube/config --cloud-config examples/config.json
kubectl get nodes
# run cluster-autoscaler
src/k8s.io/autoscaler/cluster-autoscaler/cluster-autoscaler --kubeconfig $HOME/.kube/config --cloud-provider cloudify --cloud-config examples/config.json
# scale up
cfy executions start scale -d k8s -p 'scalable_entity_name=k8s_node_group'
# scale down
cfy executions start scale -d k8s -p 'scalable_entity_name=k8s_node_group' -p 'delta=-1'
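The scale workflows run asynchronously; their progress can be followed with the Cloudify CLI, e.g.:
# list executions for the k8s deployment and check the scale workflow status
cfy executions list -d k8s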
# create a simple deployment, see https://kubernetes.io/docs/tasks/run-application/run-stateless-application-deployment/
kubectl create -f https://k8s.io/docs/tasks/run-application/deployment.yaml --kubeconfig $HOME/.kube/config
# look at the deployment description
kubectl describe deployment nginx-deployment --kubeconfig $HOME/.kube/config
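The pods created by the deployment can also be listed (the app=nginx label comes from the example manifest):
kubectl get pods -l app=nginx --kubeconfig $HOME/.kube/config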
# delete
kubectl delete deployment nginx-deployment --kubeconfig $HOME/.kube/config
# check volume
wget https://raw.githubusercontent.com/cloudify-incubator/cloudify-kubernetes-provider/master/examples/nginx.yaml
kubectl create -f nginx.yaml
watch -n 5 -d kubectl describe pod nginx
kubectl delete pod nginx
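If nginx.yaml requests a Cloudify-backed volume, the bound volume objects can be inspected with plain kubectl as well:
kubectl get pv,pvc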
# check scale
kubectl run php-apache --image=gcr.io/google_containers/hpa-example --requests=cpu=500m,memory=500M --expose --port=80
kubectl autoscale deployment php-apache --cpu-percent=90 --min=10 --max=20
watch -n 10 -d "kubectl get hpa; kubectl get pods; kubectl get nodes"
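To actually make the deployment scale, it needs load; a minimal load generator along the lines of the standard Kubernetes HPA walkthrough (the pod name is arbitrary):
# run a throwaway busybox pod and hammer the php-apache service from inside it
kubectl run -i --tty load-generator --image=busybox /bin/sh
# then, inside the pod:
while true; do wget -q -O- http://php-apache; done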
# stop scale
kubectl delete hpa php-apache
kubectl delete deployment php-apache
For cfy-go documentation, see the godoc.
For additional cluster-autoscaler documentation, see the official repository.
For full documentation about the inputs, see the official simple cluster blueprint or the copy distributed with this repository.
CLOUDPROVIDER can be aws or vsphere.
# set empty secrets
cfy secret create kubernetes_certificate_authority_data -s "#"
cfy secret create kubernetes-admin_client_key_data -s "#"
cfy secret create kubernetes_master_port -s "#"
cfy secret create kubernetes-admin_client_certificate_data -s "#"
cfy secret create kubernetes_master_ip -s "#"
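The placeholder secrets can be verified before the upload:
# confirm the placeholder secrets exist
cfy secrets list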
# upload
git clone https://github.com/cloudify-incubator/cloudify-kubernetes-provider.git -b master --depth 1
cd cloudify-kubernetes-provider
CLOUDPROVIDER=aws make upload
# delete
cfy uninstall k8s -p ignore_failure=true --allow-custom-parameters
Known issues:
- Q: Many messages like 'Not found instances: Wrong content type: text/html' in the logs on the Kubernetes manager host, or 'kube-dns not Running' in the Cloudify logs.
- A: Check the Cloudify Manager IP and port in /root/cfy.json.
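A quick way to check, assuming SSH access to the Kubernetes manager host (the host name is a placeholder):
# inspect the provider config and compare it with the real Cloudify Manager address
ssh <kubernetes-manager-host> cat /root/cfy.json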