This repository has been archived by the owner on Dec 18, 2020. It is now read-only.

Added vSphere volumes to protokube, updated vSphere testing doc and makefile. (#1)

* Add vSphere volumes to protokube. Update vSphere testing doc and makefile.

* Updated vsphere_volume to get correct IP. Addressed comments.
prashima authored and Mark Sterin committed Apr 11, 2017
1 parent 270bb10 commit 659eb42
Showing 5 changed files with 267 additions and 0 deletions.
5 changes: 5 additions & 0 deletions Makefile
@@ -195,6 +195,11 @@ push-gce-run: push
push-aws-run: push
	ssh -t ${TARGET} sudo SKIP_PACKAGE_UPDATE=1 /tmp/nodeup --conf=/var/cache/kubernetes-install/kube_env.yaml --v=8

# Please read docs/development/vsphere-dev.md before trying this out.
# Build protokube and nodeup. Pushes protokube to DOCKER_REGISTRY and scps the nodeup binary to TARGET. Uncomment the ssh line to run nodeup on the set TARGET.
push-vsphere: nodeup protokube-push
	scp -C .build/dist/nodeup ${TARGET}:~/
	# ssh -t [email protected] 'export AWS_ACCESS_KEY_ID="something"; export AWS_SECRET_ACCESS_KEY="something"; export AWS_REGION="us-west-2"; SKIP_PACKAGE_UPDATE=1 ~/nodeup --conf=/var/cache/kubernetes-install/kube_env.yaml --v=8'

protokube-gocode:
	go install k8s.io/kops/protokube/cmd/protokube
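A typical invocation of the new target, with the registry and VM address as illustrative placeholders (variables given on the make command line override assignments in the Makefile):

```bash
# 'yourhubuser' and the VM address are hypothetical placeholders.
make push-vsphere DOCKER_REGISTRY=yourhubuser TARGET=ubuntu@10.20.30.40
```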
86 changes: 86 additions & 0 deletions docs/development/vsphere-dev.md
@@ -0,0 +1,86 @@
# Development process and hacks for vSphere

This document contains details, guidelines, and tips about the ongoing effort to add vSphere support to kops.

## Contact
We are using the [#sig-onprem channel](https://kubernetes.slack.com/messages/sig-onprem/) to discuss vSphere support for kops. Please feel free to join and talk to us.

## Process
Here is a [list of requirements and tasks](https://docs.google.com/document/d/10L7I98GuW7o7QuX_1QTouxC0t0aEO_68uHKNc7o4fXY/edit#heading=h.6wyer21z75n9 "Kops-vSphere specification") that we are working on. Once the basic infrastructure for vSphere is ready, we will move these tasks to issues.

## Hacks

### Nodeup and protokube testing
This section describes testing nodeup and protokube changes on a standalone VM, running on a standalone ESX host or on vSphere.

#### Pre-requisites
The following manual steps are pre-requisites for this testing, until vSphere support in kops can create this infrastructure itself.

+ Set up password-free SSH to the VM:
```bash
cat ~/.ssh/id_rsa.pub | ssh <username>@<vm_ip> 'cat >> .ssh/authorized_keys'
```
+ The nodeup configuration file needs to be present on the VM. It can be copied from an existing AWS-created master (or worker, whichever you are testing); on an existing cluster node it is located at /var/cache/kubernetes-install/kube_env.yaml (see the copy sketch after this list). Sample nodeup configuration file:
```yaml
Assets:
- 5e486d4a2700a3a61c4edfd97fb088984a7f734f@https://storage.googleapis.com/kubernetes-release/release/v1.5.2/bin/linux/amd64/kubelet
- 10e675883b167140f78ddf7ed92f936dca291647@https://storage.googleapis.com/kubernetes-release/release/v1.5.2/bin/linux/amd64/kubectl
- 19d49f7b2b99cd2493d5ae0ace896c64e289ccbb@https://storage.googleapis.com/kubernetes-release/network-plugins/cni-07a8a28637e97b22eb8dfe710eeae1344f69d16e.tar.gz
ClusterName: cluster3.mangoreviews.com
ConfigBase: s3://your-objectstore/cluster1.yourdomain.com
InstanceGroupName: master-us-west-2a
Tags:
- _automatic_upgrades
- _aws
- _cni_bridge
- _cni_host_local
- _cni_loopback
- _cni_ptp
- _kubernetes_master
- _kubernetes_pool
- _protokube
channels:
- s3://your-objectstore/cluster1.yourdomain.com/addons/bootstrap-channel.yaml
protokubeImage:
  hash: 6805cba0ea13805b2fa439914679a083be7ac959
  name: protokube:1.5.1
  source: https://kubeupv2.s3.amazonaws.com/kops/1.5.1/images/protokube.tar.gz

```
+ Currently the vSphere code uses AWS S3 for storing all configurations, specs, etc., so you need valid AWS credentials.
+ The s3://your-objectstore/cluster1.yourdomain.com folder should contain all the necessary configuration, spec, addons, etc. (If you don't know how to get this, read up on kops and how to deploy a cluster using kops.)
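A minimal sketch of fetching the nodeup configuration from an existing AWS-created master and copying it to the test VM (the user names and addresses are placeholders; assumes SSH access to both machines):

```bash
# Pull kube_env.yaml from an existing cluster node, then push it to the test VM.
scp admin@<existing_master_ip>:/var/cache/kubernetes-install/kube_env.yaml .
scp kube_env.yaml <username>@<vm_ip>:~/
```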

#### Testing your changes
Once you have made your changes to the nodeup and protokube code, you will want to test them on a VM. To do so you need to build the nodeup binary and copy it to the desired VM, and modify the nodeup code so that it uses the protokube container image that contains your changes. All of this can be done by setting a few environment variables, making minor code updates, and running 'make push-vsphere'.

+ Create (or reuse) a Docker Hub registry with a 'protokube' repository for your custom image. Update the registry details in the Makefile by modifying the DOCKER_REGISTRY variable. Don't forget to run 'docker login' once with your registry credentials.
+ Export the TARGET environment variable, setting its value to the username@vm_ip of your test VM.
+ Update $KOPS_DIR/upup/models/nodeup/_protokube/services/protokube.service.template:
```
ExecStart=/usr/bin/docker run -v /:/rootfs/ -v /var/run/dbus:/var/run/dbus -v /run/systemd:/run/systemd --net=host --privileged -e AWS_ACCESS_KEY_ID='something' -e AWS_SECRET_ACCESS_KEY='something' <your-registry>/protokube:<image-tag> /usr/bin/protokube "$DAEMON_ARGS"
```
+ Run 'make push-vsphere'. This will build the nodeup binary, scp it to your test VM, build the protokube image, and upload it to your registry.
+ SSH to your test VM and set the following environment variables:
```bash
export AWS_REGION=us-west-2
export AWS_ACCESS_KEY_ID=something
export AWS_SECRET_ACCESS_KEY=something
```
+ Run './nodeup --conf kube_env.yaml' to test your custom-built nodeup and protokube. An end-to-end sketch of these steps follows this list.
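Putting the steps above together, a typical iteration from the kops source tree might look like this (registry, VM address, and credentials are illustrative):

```bash
docker login                                  # once, with your registry credentials
make push-vsphere TARGET=<username>@<vm_ip>   # assumes DOCKER_REGISTRY is set in the Makefile

ssh <username>@<vm_ip>
# on the test VM:
export AWS_REGION=us-west-2
export AWS_ACCESS_KEY_ID=something
export AWS_SECRET_ACCESS_KEY=something
./nodeup --conf kube_env.yaml --v=8
```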

**Tip:** Consider adding the following code to $KOPS_DIR/upup/pkg/fi/nodeup/nodetasks/load_image.go to avoid downloading the protokube image. Your custom image will instead be pulled when systemd runs protokube.service (because of the change we made to protokube.service.template).
```go
// Add this after the url variable has been populated.
// (Requires "strings" in the file's import list.)
if strings.Contains(url, "protokube") {
    fmt.Println("Skipping protokube image download and loading.")
    return nil
}
```
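To confirm that systemd started your custom image rather than the stock one, a quick check on the VM (assuming Docker is the container runtime):

```bash
docker ps --format '{{.Image}}' | grep protokube
# expected output: <your-registry>/protokube:<image-tag>
```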


**Note:** The same testing can also be done using the following alternate steps (currently _not working_ due to a hash-match failure):
+ Run 'make protokube-export' and 'make nodeup' to build and export the protokube image as a tar.gz, and to build the nodeup binary, located at $KOPS_DIR/.build/dist/images/protokube.tar.gz and $KOPS_DIR/.build/dist/nodeup, respectively.
+ Copy the nodeup binary to the test VM.
+ Upload $KOPS_DIR/.build/dist/images/protokube.tar.gz and $KOPS_DIR/.build/dist/images/protokube.tar.gz.sha1, with appropriate permissions, to a location that the test VM can access, e.g. your development machine's public_html if you are working on a Linux machine.
+ In kube_env.yaml (see the pre-requisite steps), update the hash value to the contents of protokube.tar.gz.sha1 and the source to the uploaded location (a sketch for regenerating the hash follows this list).
+ SSH to your test VM, set the necessary environment variables, and run './nodeup --conf kube_env.yaml'.
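For the hash update above, the .sha1 file is simply the hex digest of the tarball; if it needs to be regenerated (assuming a Linux machine with sha1sum available):

```bash
cd $KOPS_DIR/.build/dist/images
sha1sum protokube.tar.gz | awk '{print $1}' > protokube.tar.gz.sha1
cat protokube.tar.gz.sha1  # this value goes into kube_env.yaml as the hash
```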
4 changes: 4 additions & 0 deletions nodeup/pkg/model/protokube.go
Expand Up @@ -160,6 +160,9 @@ type ProtokubeFlags struct {
	Cloud *string `json:"cloud,omitempty" flag:"cloud"`

	ApplyTaints *bool `json:"applyTaints,omitempty" flag:"apply-taints"`

	// ClusterId is required only for the vSphere cloud type, to pass cluster ID information to protokube. AWS and GCE workflows ignore this flag.
	ClusterId *string `json:"cluster-id,omitempty" flag:"cluster-id"`
}

// ProtokubeFlags returns the flags object for protokube
@@ -207,6 +210,7 @@ func (t *ProtokubeBuilder) ProtokubeFlags(k8sVersion semver.Version) *ProtokubeF
f.DNSProvider = fi.String("google-clouddns")
case fi.CloudProviderVSphere:
f.DNSProvider = fi.String("aws-route53")
f.ClusterId = fi.String(t.Cluster.ObjectMeta.Name)
default:
glog.Warningf("Unknown cloudprovider %q; won't set DNS provider")
}
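For a vSphere cluster, the net effect of the flag added above is an extra --cluster-id argument on the rendered protokube command line, roughly like this (illustrative; the other flags are omitted and depend on the service template):

```bash
/usr/bin/protokube --cloud=vsphere --cluster-id=cluster1.yourdomain.com ...
```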
12 changes: 12 additions & 0 deletions protokube/cmd/protokube/main.go
@@ -126,6 +126,18 @@ func run() error {
		if internalIP == nil {
			internalIP = gceVolumes.InternalIP()
		}
	} else if cloud == "vsphere" {
		glog.Info("Initializing vSphere volumes")
		vsphereVolumes, err := protokube.NewVSphereVolumes()
		if err != nil {
			glog.Errorf("Error initializing vSphere: %q", err)
			os.Exit(1)
		}
		volumes = vsphereVolumes
		if internalIP == nil {
			internalIP = vsphereVolumes.InternalIp()
		}
	} else {
		glog.Errorf("Unknown cloud %q", cloud)
		os.Exit(1)
160 changes: 160 additions & 0 deletions protokube/pkg/protokube/vsphere_volume.go
@@ -0,0 +1,160 @@
/*
Copyright 2017 The Kubernetes Authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/

package protokube

import (
	"errors"
	"net"

	"github.com/golang/glog"
)

const EtcdDataKey = "01"
const EtcdDataVolPath = "/mnt/master-" + EtcdDataKey
const EtcdEventKey = "02"
const EtcdEventVolPath = "/mnt/master-" + EtcdEventKey

// TODO Use lsblk or counterpart command to find the actual device details.
const LocalDeviceForDataVol = "/dev/sdb1"
const LocalDeviceForEventsVol = "/dev/sdc1"
const VolStatusValue = "attached"
const EtcdNodeName = "a"
const EtcdClusterName = "main"
const EtcdEventsClusterName = "events"

type VSphereVolumes struct {
	// Dummy property. Not used anywhere for now.
	paths map[string]string
}

var _ Volumes = &VSphereVolumes{}
var machineIp net.IP

func NewVSphereVolumes() (*VSphereVolumes, error) {
	vsphereVolumes := &VSphereVolumes{
		paths: make(map[string]string),
	}
	vsphereVolumes.paths[EtcdDataKey] = EtcdDataVolPath
	vsphereVolumes.paths[EtcdEventKey] = EtcdEventVolPath
	return vsphereVolumes, nil
}

func (v *VSphereVolumes) FindVolumes() ([]*Volume, error) {
	var volumes []*Volume
	ip := v.InternalIp()
	attachedTo := ""
	if ip != nil {
		attachedTo = ip.String()
	}

	// etcd data volume and etcd cluster spec.
	{
		vol := &Volume{
			ID:          EtcdDataKey,
			LocalDevice: LocalDeviceForDataVol,
			AttachedTo:  attachedTo,
			Mountpoint:  EtcdDataVolPath,
			Status:      VolStatusValue,
			Info: VolumeInfo{
				Description: EtcdClusterName,
			},
		}
		etcdSpec := &EtcdClusterSpec{
			ClusterKey: EtcdClusterName,
			NodeName:   EtcdNodeName,
			NodeNames:  []string{EtcdNodeName},
		}
		vol.Info.EtcdClusters = []*EtcdClusterSpec{etcdSpec}
		volumes = append(volumes, vol)
	}

	// etcd events volume and etcd events cluster spec.
	{
		vol := &Volume{
			ID:          EtcdEventKey,
			LocalDevice: LocalDeviceForEventsVol,
			AttachedTo:  attachedTo,
			Mountpoint:  EtcdEventVolPath,
			Status:      VolStatusValue,
			Info: VolumeInfo{
				Description: EtcdEventsClusterName,
			},
		}
		etcdSpec := &EtcdClusterSpec{
			ClusterKey: EtcdEventsClusterName,
			NodeName:   EtcdNodeName,
			NodeNames:  []string{EtcdNodeName},
		}
		vol.Info.EtcdClusters = []*EtcdClusterSpec{etcdSpec}
		volumes = append(volumes, vol)
	}
	glog.Infof("Found volumes: %v", volumes)
	return volumes, nil
}

func (v *VSphereVolumes) AttachVolume(volume *Volume) error {
	// Currently this is a no-op for vSphere. The virtual disks should already be mounted on this VM.
	glog.Infof("All volumes should already be attached. No operation done.")
	return nil
}

func (v *VSphereVolumes) InternalIp() net.IP {
	if machineIp == nil {
		ip, err := getMachineIp()
		if err != nil {
			// Could not determine the machine IP; leave machineIp unset.
			return nil
		}
		machineIp = ip
	}
	return machineIp
}

func getMachineIp() (net.IP, error) {
	ifaces, err := net.Interfaces()
	if err != nil {
		return nil, err
	}
	for _, iface := range ifaces {
		if iface.Flags&net.FlagUp == 0 {
			continue // interface down
		}
		if iface.Flags&net.FlagLoopback != 0 {
			continue // loopback interface
		}
		addrs, err := iface.Addrs()
		if err != nil {
			return nil, err
		}
		for _, addr := range addrs {
			var ip net.IP
			switch v := addr.(type) {
			case *net.IPNet:
				ip = v.IP
			case *net.IPAddr:
				ip = v.IP
			}
			if ip == nil || ip.IsLoopback() {
				continue
			}
			ip = ip.To4()
			if ip == nil {
				continue // not an IPv4 address
			}
			return ip, nil
		}
	}
	return nil, errors.New("no IP found")
}
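The constants above hardcode the device and mount layout, so the etcd disks on the test VM need to be attached and mounted to match. An illustrative lsblk view consistent with those paths:

```bash
lsblk -o NAME,MOUNTPOINT
# NAME   MOUNTPOINT
# sda1   /
# sdb1   /mnt/master-01
# sdc1   /mnt/master-02
```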
