
This repository has been deprecated because it is based on a no longer supported pre-v1alpha1 version of the Cluster API. Please see cma-ssh or the Cluster API for alternative implementations.

Kubernetes cluster-api-provider-ssh Project

This repository hosts an implementation of a provider using SSH for the cluster-api project.

Community, discussion, contribution, and support

Learn how to engage with the Kubernetes community on the community page.

You can reach the maintainers of this project through the channels listed on the community page.

Code of conduct

Participation in the Kubernetes community is governed by the Kubernetes Code of Conduct.

Development notes

Obtaining the code

go get github.com/samsung-cnct/cluster-api-provider-ssh
cd $GOPATH/src/github.com/samsung-cnct/cluster-api-provider-ssh

Generating cluster, machine, and provider-components files

Follow the instructions in the clusterctl/examples/ssh directory.
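
As a sketch of that step (assuming the examples directory follows the usual cluster-api provider layout, with a generate-yaml.sh script that writes into an out/ subdirectory; check the directory for the exact entry point):

cd clusterctl/examples/ssh
./generate-yaml.sh    # hypothetical entry point: should emit cluster.yaml, machines.yaml, and provider-components.yaml into ./out
ls out/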

Deploying a cluster

clusterctl needs access to the private key in order to finalize the new internal cluster.

eval $(ssh-agent)
ssh-add <private key file>
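
You can confirm the key is loaded before proceeding:

ssh-add -l    # should list the fingerprint of the key just added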

Build the clusterctl binary:

make compile

  • Run using minikube[1]:

    ⚠️ Warning: only minikube version 0.28.0 is supported (see the version check after this list).

./bin/clusterctl create cluster --provider ssh \
    -c ./clusterctl/examples/ssh/out/cluster.yaml \
    -m ./clusterctl/examples/ssh/out/machines.yaml \
    -p ./clusterctl/examples/ssh/out/provider-components.yaml
  • Run using an external cluster:

./bin/clusterctl create cluster --provider ssh \
    --existing-bootstrap-cluster-kubeconfig /path/to/kubeconfig \
    -c ./clusterctl/examples/ssh/out/cluster.yaml \
    -m ./clusterctl/examples/ssh/out/machines.yaml \
    -p ./clusterctl/examples/ssh/out/provider-components.yaml
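
A couple of sanity checks around the run (the watch commands are a hedged sketch; the clusters and machines resource names assume the pre-v1alpha1 cluster-api CRDs installed by provider-components.yaml):

minikube version    # must report v0.28.0, per the warning above
# While clusterctl runs against minikube, watch progress from the bootstrap cluster
# (assumes the default minikube kubectl context):
kubectl --context minikube get clusters,machines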

Validate your new cluster:

export KUBECONFIG=${PWD}/kubeconfig
kubectl get nodes
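
A few additional standard sanity checks on the new cluster:

kubectl cluster-info
kubectl get pods -n kube-system    # core components should be Running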

Building and deploying new controller images for development

To test custom changes to either the machine controller or the cluster controller, you need to build and push new images to a repository. There are make targets for this.

For example:

  • Push both the ssh-cluster-controller and ssh-machine-controller images: make dev_push
  • Push only the ssh-machine-controller image: make dev_push_machine
  • Push only the ssh-cluster-controller image: make dev_push_cluster

Each image is tagged with the username of the account you used to build and push it.
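
For instance (hypothetical username jdoe; the registry path is an assumption based on the images referenced in provider-components.yaml, so check the Makefile for the actual destination):

make dev_push
# pushes images such as:
#   gcr.io/k8s-cluster-api/ssh-cluster-controller:jdoe
#   gcr.io/k8s-cluster-api/ssh-machine-controller:jdoe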

Remember to change the provider-components.yaml manifest to point to your images. For example:

diff --git a/clusterctl/examples/ssh/provider-components.yaml.template b/clusterctl/examples/ssh/provider-components.yaml.template
index 8fac530..3d6c246 100644
--- a/clusterctl/examples/ssh/provider-components.yaml.template
+++ b/clusterctl/examples/ssh/provider-components.yaml.template
@@ -45,7 +45,7 @@ spec:
             cpu: 100m
             memory: 30Mi
       - name: ssh-cluster-controller
-        image: gcr.io/k8s-cluster-api/ssh-cluster-controller:0.0.1
+        image: gcr.io/k8s-cluster-api/ssh-cluster-controller:paul
         volumeMounts:
           - name: config
             mountPath: /etc/kubernetes
@@ -69,7 +69,7 @@ spec:
             cpu: 400m
             memory: 500Mi
       - name: ssh-machine-controller
-        image: gcr.io/k8s-cluster-api/ssh-machine-controller:0.0.1
+        image: gcr.io/k8s-cluster-api/ssh-machine-controller:paul
         volumeMounts:
           - name: config
             mountPath: /etc/kubernetes
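
Instead of editing the template by hand, a one-liner sketch (assumes GNU sed and that your images are tagged with your username, as above):

sed -E -i 's|(ssh-(cluster|machine)-controller):0\.0\.1|\1:'"$USER"'|' \
    clusterctl/examples/ssh/provider-components.yaml.template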

[1] If using minikube on Linux, you may prefer the kvm2 driver. To use it, install the driver and then add the --vm-driver=kvm2 flag, as sketched below.
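
Applied to the minikube invocation above, that looks like:

./bin/clusterctl create cluster --provider ssh --vm-driver=kvm2 \
    -c ./clusterctl/examples/ssh/out/cluster.yaml \
    -m ./clusterctl/examples/ssh/out/machines.yaml \
    -p ./clusterctl/examples/ssh/out/provider-components.yaml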