diff --git a/.docs/user-guide/config.md b/.docs/user-guide/config.md index 39b48f41..8a175f54 100644 --- a/.docs/user-guide/config.md +++ b/.docs/user-guide/config.md @@ -589,6 +589,7 @@ remote storage systems. Currently the following storage drivers are supported: [EBS](./storage-providers.md#aws-ebs) | ebs, ec2 [EFS](./storage-providers.md#aws-efs) | efs [RBD](./storage-providers.md#ceph-rbd) | rbd +[GCEPD](./storage-providers.md#gcepd) | gcepd ..more coming| The `libstorage.server.libstorage.storage.driver` property can be used to diff --git a/.docs/user-guide/storage-providers.md b/.docs/user-guide/storage-providers.md index bfb56215..6bd2d8aa 100644 --- a/.docs/user-guide/storage-providers.md +++ b/.docs/user-guide/storage-providers.md @@ -707,3 +707,107 @@ libstorage: in place, it may be possible for a client to attach a volume that is already attached to another node. Mounting and writing to such a volume could lead to data corruption. + +## GCEPD + +The Google Compute Engine Persistent Disk (GCEPD) driver registers a driver +named `gcepd` with the `libStorage` driver manager and is used to connect and +mount Google Compute Engine (GCE) persistent disks with GCE machine instances. + + +### Requirements + +* GCE account +* Service account credentials in JSON for GCE project. If not using the Compute + Engine default service account, create a new service account with the Service + Account Actor role, and create/download a new private key in JSON format. see + [creating a service account](https://developers.google.com/identity/protocols/OAuth2ServiceAccount#creatinganaccount) + for details. + +### Configuration + +The following is an example with all possible fields configured. For a running +example see the `Examples` section. + +```yaml +gcepd: + keyfile: /etc/gcekey.json + zone: us-west1-b + defaultDiskType: pd-ssd + tag: rexray +``` + +#### Configuration Notes + +* The `keyfile` parameter is required. It specifies a path on disk to a file + containing the JSON-encoded service account credentials. This file can be + downloaded from the GCE web portal. +* The `zone` parameter is optional, and configures the driver to *only* allow + access to the given zone. Creating and listing disks from other zones will be + denied. If a zone is not specified, the zone from the client Instance ID will + be used when creating new disks. +* The `defaultDiskType` parameter is optional, and specified what type of disk + to create, either `pd-standard` or `pd-ssd`. When not specified, the default + is `pd-ssd`. +* The `tag` parameter is optional, and causes the driver to create or return + disks that have a matching tag. The tag is implemented by serializing a JSON + structure in to the `Description` field of a GCE disk. Use of this parameter + is encouraged, as the driver will only return volumes that have been created + by the driver, which is most useful to eliminate listing the boot disks of + every GCE disk in your project/zone. + +### Runtime behavior + +* The GCEPD driver enforces the GCE requirements for disk sizing and naming. + Disks must be created with a minimum size of 10GB. Disk names must adhere to + the regular expression of [a-z]([-a-z0-9]*[a-z0-9])?, which means the first + character must be a lowercase letter, and all following characters must be a + dash, lowercase letter, or digit, except the last character, which cannot be a + dash. 
+* If the `zone` parameter is not specified in the driver configuration, and a + request is received to list all volumes that does not specify a zone in the + InstanceID header, volumes from all zones will be returned. +* By default, all disks will be created with type `pd-ssd`, which creates an SSD + based disk. If you wish to created disks that are not SSD-based, change the + default via the driver config, or the type can be changed at creation time by + using the `Type` field of the create request. + +### Activating the Driver + +To activate the GCEPD driver please follow the instructions for +[activating storage drivers](./config.md#storage-drivers), using `gcepd` as the +driver name. + +### Troubleshooting + +* Make sure that the JSON credentials file as specified in the `keyfile` + configuration parameter is present and accessible. + +### Examples + +Below is a full `config.yml` that works with GCE + +```yaml +libstorage: + server: + services: + gcepd: + driver: gcepd + gcepd: + keyfile: /etc/gcekey.json + tag: rexray +``` + +### Caveats + +* Snapshot and copy functionality is not yet implemented +* Most GCE instances can have up to 64 TB of total persistent disk space + attached. Shared-core machine types or custom machine types with less than + 3.75 GB of memory are limited to 3 TB of total persistent disk space. Total + persistent disk space for an instance includes the size of the root persistent + disk. You can attach up to 16 independent persistent disks to most instances, + but instances with shared-core machine types or custom machine types with less + than 3.75 GB of memory are limited to a maximum of 4 persistent disks, + including the root persistent disk. See + [GCE Disks](https://cloud.google.com/compute/docs/disks/) docs for more + details. diff --git a/Makefile b/Makefile index 9e291957..6de1d1fc 100644 --- a/Makefile +++ b/Makefile @@ -1075,12 +1075,22 @@ test: $(MAKE) -j parallel-test test-debug: - env LIBSTORAGE_DEBUG=true $(MAKE) test + LIBSTORAGE_DEBUG=true $(MAKE) test test-rbd: DRIVERS=rbd $(MAKE) deps DRIVERS=rbd $(MAKE) ./drivers/storage/rbd/tests/rbd.test +test-rbd-clean: + DRIVERS=rbd $(MAKE) clean + +test-gcepd: + DRIVERS=gcepd $(MAKE) deps + DRIVERS=gcepd $(MAKE) ./drivers/storage/gcepd/tests/gcepd.test + +test-gcepd-clean: + DRIVERS=gcepd $(MAKE) clean + clean: $(GO_CLEAN) clobber: clean $(GO_CLOBBER) diff --git a/drivers/storage/gcepd/.gitignore b/drivers/storage/gcepd/.gitignore new file mode 100644 index 00000000..8000dd9d --- /dev/null +++ b/drivers/storage/gcepd/.gitignore @@ -0,0 +1 @@ +.vagrant diff --git a/drivers/storage/gcepd/executor/gce_executor.go b/drivers/storage/gcepd/executor/gce_executor.go new file mode 100644 index 00000000..08536857 --- /dev/null +++ b/drivers/storage/gcepd/executor/gce_executor.go @@ -0,0 +1,109 @@ +// +build !libstorage_storage_executor libstorage_storage_executor_gcepd + +package executor + +import ( + "io/ioutil" + "path" + "regexp" + + gofig "github.com/akutz/gofig/types" + + "github.com/codedellemc/libstorage/api/registry" + "github.com/codedellemc/libstorage/api/types" + "github.com/codedellemc/libstorage/drivers/storage/gcepd" + gceUtils "github.com/codedellemc/libstorage/drivers/storage/gcepd/utils" +) + +const ( + diskIDPath = "/dev/disk/by-id" + diskPrefix = "google-" +) + +// driver is the storage executor for the storage driver. 
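+// The executor holds only the gofig configuration. Supported and InstanceID
+// defer to the gceUtils helpers, while LocalDevices maps attached volumes to
+// their device paths by scanning /dev/disk/by-id for entries that carry the
+// "google-" prefix.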
+type driver struct { + config gofig.Config +} + +func init() { + registry.RegisterStorageExecutor(gcepd.Name, newDriver) +} + +func newDriver() types.StorageExecutor { + return &driver{} +} + +func (d *driver) Init(ctx types.Context, config gofig.Config) error { + d.config = config + return nil +} + +func (d *driver) Name() string { + return gcepd.Name +} + +// Supported returns a flag indicating whether or not the platform +// implementing the executor is valid for the host on which the executor +// resides. +func (d *driver) Supported( + ctx types.Context, + opts types.Store) (bool, error) { + + return gceUtils.IsGCEInstance(ctx) +} + +// InstanceID returns the instance ID from the current instance from metadata +func (d *driver) InstanceID( + ctx types.Context, + opts types.Store) (*types.InstanceID, error) { + + return gceUtils.InstanceID(ctx) +} + +// NextDevice returns the next available device. +func (d *driver) NextDevice( + ctx types.Context, + opts types.Store) (string, error) { + + return "", types.ErrNotImplemented +} + +// Retrieve device paths currently attached and/or mounted +func (d *driver) LocalDevices( + ctx types.Context, + opts *types.LocalDevicesOpts) (*types.LocalDevices, error) { + + files, err := ioutil.ReadDir(diskIDPath) + if err != nil { + return nil, err + } + + persistentDiskRX, err := regexp.Compile( + diskPrefix + `(` + gceUtils.DiskNameRX + `)`) + if err != nil { + return nil, err + } + + attachedDisks, err := gceUtils.GetDisks(ctx) + if err != nil { + return nil, err + } + + ld := &types.LocalDevices{Driver: d.Name()} + devMap := map[string]string{} + for _, f := range files { + if persistentDiskRX.MatchString(f.Name()) { + matches := persistentDiskRX.FindStringSubmatch(f.Name()) + volID := matches[1] + if _, ok := attachedDisks[volID]; ok && volID != "" { + devMap[volID] = path.Join(diskIDPath, f.Name()) + } + } + } + + if len(devMap) > 0 { + ld.DeviceMap = devMap + } + + return ld, nil +} diff --git a/drivers/storage/gcepd/gcepd.go b/drivers/storage/gcepd/gcepd.go new file mode 100644 index 00000000..54bf3547 --- /dev/null +++ b/drivers/storage/gcepd/gcepd.go @@ -0,0 +1,44 @@ +// +build !libstorage_storage_driver libstorage_storage_driver_gcepd + +package gcepd + +import ( + gofigCore "github.com/akutz/gofig" + gofig "github.com/akutz/gofig/types" +) + +const ( + // Name is the provider's name. + Name = "gcepd" + + // InstanceIDFieldProjectID is the key to retrieve the ProjectID value + // from the InstanceID Field map. + InstanceIDFieldProjectID = "projectID" + + // InstanceIDFieldZone is the key to retrieve the zone value from the + // InstanceID Field map. 
+ InstanceIDFieldZone = "zone" + + // DiskTypeSSD indicates an SSD based disk should be created + DiskTypeSSD = "pd-ssd" + + // DiskTypeStandard indicates a standard (non-SSD) disk + DiskTypeStandard = "pd-standard" + + // DefaultDiskType indicates what type of disk to create by default + DefaultDiskType = DiskTypeSSD +) + +func init() { + r := gofigCore.NewRegistration("GCE") + r.Key(gofig.String, "", "", + "Required: JSON keyfile for service account", "gcepd.keyfile") + r.Key(gofig.String, "", "", + "If defined, limit GCE access to given zone", "gcepd.zone") + r.Key(gofig.String, "", DefaultDiskType, "Default GCE disk type", + "gcepd.defaultDiskType") + r.Key(gofig.String, "", "", "Tag to apply and filter disks", + "gcepd.tag") + + gofigCore.Register(r) +} diff --git a/drivers/storage/gcepd/storage/gce_storage.go b/drivers/storage/gcepd/storage/gce_storage.go new file mode 100644 index 00000000..80bca2df --- /dev/null +++ b/drivers/storage/gcepd/storage/gce_storage.go @@ -0,0 +1,987 @@ +// +build !libstorage_storage_driver libstorage_storage_driver_gcepd + +package storage + +import ( + "crypto/md5" + "encoding/json" + "fmt" + "hash" + "io/ioutil" + "os" + "path/filepath" + "regexp" + "strings" + "sync" + "time" + + log "github.com/Sirupsen/logrus" + + gofig "github.com/akutz/gofig/types" + goof "github.com/akutz/goof" + + "github.com/codedellemc/libstorage/api/context" + "github.com/codedellemc/libstorage/api/registry" + "github.com/codedellemc/libstorage/api/types" + "github.com/codedellemc/libstorage/drivers/storage/gcepd" + "github.com/codedellemc/libstorage/drivers/storage/gcepd/utils" + + "golang.org/x/oauth2/google" + compute "google.golang.org/api/compute/v0.beta" + "google.golang.org/api/googleapi" +) + +const ( + cacheKeyC = "cacheKey" + tagKey = "libstoragetag" + minDiskSizeGB = 10 +) + +var ( + // GCE labels have to start with a lowercase letter, and have to end + // with a lowercase letter or numeral. In between can be lowercase + // letters, numbers or dashes + tagRegex = regexp.MustCompile(`^[a-z](?:[a-z0-9\-]*[a-z0-9])?$`) +) + +type driver struct { + config gofig.Config + keyFile string + projectID *string + zone string + defaultDiskType string + tag string +} + +func init() { + registry.RegisterStorageDriver(gcepd.Name, newDriver) +} + +func newDriver() types.StorageDriver { + return &driver{} +} + +func (d *driver) Name() string { + return gcepd.Name +} + +// Init initializes the driver. 
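+// A relative gcepd.keyfile path is resolved against the current working
+// directory, the project ID is extracted from the key file, and the optional
+// zone, defaultDiskType, and tag settings are validated before the driver
+// reports itself as initialized.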
+func (d *driver) Init(context types.Context, config gofig.Config) error { + d.config = config + d.keyFile = d.config.GetString("gcepd.keyfile") + if d.keyFile == "" { + return goof.New("GCE service account keyfile is required") + } + if !filepath.IsAbs(d.keyFile) { + cwd, err := os.Getwd() + if err != nil { + return goof.New("Unable to determine CWD") + } + d.keyFile = filepath.Join(cwd, d.keyFile) + } + d.zone = d.config.GetString("gcepd.zone") + + if d.zone != "" { + context.Infof("All access is restricted to zone: %s", d.zone) + } + + pID, err := d.extractProjectID() + if err != nil || pID == nil || *pID == "" { + return goof.New("Unable to set project ID from keyfile") + } + d.projectID = pID + + d.defaultDiskType = config.GetString("gcepd.defaultDiskType") + + switch d.defaultDiskType { + case gcepd.DiskTypeSSD, gcepd.DiskTypeStandard: + // noop + case "": + d.defaultDiskType = gcepd.DefaultDiskType + default: + return goof.Newf( + "Invalid GCE disk type: %s", d.defaultDiskType) + } + + d.tag = config.GetString("gcepd.tag") + if d.tag != "" && !tagRegex.MatchString(d.tag) { + return goof.New("Invalid GCE tag format") + } + + context.Info("storage driver initialized") + return nil +} + +var ( + sessions = map[string]*compute.Service{} + sessionsL = &sync.Mutex{} +) + +func writeHkey(h hash.Hash, ps *string) { + if ps == nil { + return + } + h.Write([]byte(*ps)) +} + +func (d *driver) Login(ctx types.Context) (interface{}, error) { + sessionsL.Lock() + defer sessionsL.Unlock() + + var ( + ckey string + hkey = md5.New() + ) + + // Unique connections to google APIs are based on project ID + // Project ID is embedded in the service account key JSON + writeHkey(hkey, d.projectID) + ckey = fmt.Sprintf("%x", hkey.Sum(nil)) + + // if the session is cached then return it + if svc, ok := sessions[ckey]; ok { + ctx.WithField(cacheKeyC, ckey).Debug("using cached gce service") + return svc, nil + } + + fields := map[string]interface{}{ + cacheKeyC: ckey, + "keyfile": d.keyFile, + "projectID": *d.projectID, + } + + serviceAccountJSON, err := d.getKeyFileJSON() + if err != nil { + ctx.WithFields(fields).Errorf( + "Could not read service account credentials file: %s", + err) + return nil, err + } + + config, err := google.JWTConfigFromJSON( + serviceAccountJSON, + compute.ComputeScope, + ) + if err != nil { + ctx.WithFields(fields).Errorf( + "Could not create JWT Config From JSON: %s", err) + return nil, err + } + + client, err := compute.New(config.Client(ctx)) + if err != nil { + ctx.WithFields(fields).Errorf( + "Could not create GCE service connection: %s", err) + return nil, err + } + + sessions[ckey] = client + ctx.Info("GCE service connection created and cached") + return client, nil +} + +// NextDeviceInfo returns the information about the driver's next available +// device workflow. +func (d *driver) NextDeviceInfo( + ctx types.Context) (*types.NextDeviceInfo, error) { + return nil, nil +} + +// Type returns the type of storage the driver provides. +func (d *driver) Type(ctx types.Context) (types.StorageType, error) { + return types.Block, nil +} + +// InstanceInspect returns an instance. +func (d *driver) InstanceInspect( + ctx types.Context, + opts types.Store) (*types.Instance, error) { + + iid := context.MustInstanceID(ctx) + return &types.Instance{ + InstanceID: iid, + }, nil +} + +// Volumes returns all volumes or a filtered list of volumes. 
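+// When a zone is pinned by the client InstanceID or the driver configuration,
+// only disks from that zone are listed; otherwise an aggregated list across
+// all zones in the project is returned.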
+func (d *driver) Volumes( + ctx types.Context, + opts *types.VolumesOpts) ([]*types.Volume, error) { + + var gceDisks []*compute.Disk + var err error + vols := []*types.Volume{} + + zone, err := d.validZone(ctx) + if err != nil { + return nil, err + } + + if zone != nil && *zone != "" { + // get list of disks in zone from GCE + gceDisks, err = d.getDisks(ctx, zone) + } else { + // without a zone, get disks in all zones + gceDisks, err = d.getAggregatedDisks(ctx) + } + + if err != nil { + ctx.Errorf("Unable to get disks from GCE API") + return nil, goof.WithError( + "Unable to get disks from GCE API", err) + } + + // shortcut early if nothing is returned + // TODO: is it an error if there are no volumes? EBS driver returns an + // error, but other drivers (ScaleIO, RBD) return an empty slice + if len(gceDisks) == 0 { + return vols, nil + } + + // convert GCE disks to libstorage types.Volume + vols, err = d.toTypeVolume(ctx, gceDisks, opts.Attachments, zone) + if err != nil { + return nil, goof.WithError("error converting to types.Volume", + err) + } + + return vols, nil +} + +// VolumeInspect inspects a single volume. +func (d *driver) VolumeInspect( + ctx types.Context, + volumeID string, + opts *types.VolumeInspectOpts) (*types.Volume, error) { + + zone, err := d.validZone(ctx) + if err != nil { + return nil, err + } + + if zone == nil || *zone == "" { + return nil, goof.New("Zone is required for VolumeInspect") + } + + gceDisk, err := d.getDisk(ctx, zone, &volumeID) + if err != nil { + ctx.Errorf("Unable to get disk from GCE API") + return nil, goof.WithError( + "Unable to get disk from GCE API", err) + } + if gceDisk == nil { + return nil, goof.New("No Volume Found") + } + + gceDisks := []*compute.Disk{gceDisk} + vols, err := d.toTypeVolume(ctx, gceDisks, opts.Attachments, zone) + if err != nil { + return nil, goof.WithError("error converting to types.Volume", + err) + } + + return vols[0], nil +} + +// VolumeCreate creates a new volume. 
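+// The target zone comes from the create request or, when absent there, from
+// the driver configuration or client InstanceID; if both sources specify a
+// zone they must match. The volume name and size are validated against GCE
+// requirements before the disk is created and, when a tag is configured,
+// labeled with it.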
+func (d *driver) VolumeCreate( + ctx types.Context, + volumeName string, + opts *types.VolumeCreateOpts) (*types.Volume, error) { + + fields := map[string]interface{}{ + "driverName": d.Name(), + "volumeName": volumeName, + "opts": opts, + } + + zone, err := d.validZone(ctx) + if err != nil { + return nil, err + } + + // No zone information from request, driver, or IID + if (zone == nil || *zone == "") && (opts.AvailabilityZone == nil || *opts.AvailabilityZone == "") { + return nil, goof.New("Zone is required for VolumeCreate") + } + + if zone != nil && *zone != "" { + if opts.AvailabilityZone != nil && *opts.AvailabilityZone != "" { + // If request and driver/IID have zone, they must match + if *zone != *opts.AvailabilityZone { + return nil, goof.WithFields(fields, + "Cannot create volume in given zone") + } + } else { + // Set the zone to the driver/IID config + opts.AvailabilityZone = zone + } + + } + + if !utils.IsValidDiskName(&volumeName) { + return nil, goof.WithFields(fields, + "Volume name does not meet GCE naming requirements") + } + + ctx.WithFields(fields).Debug("creating volume") + + // Check if volume with same name exists + gceDisk, err := d.getDisk(ctx, opts.AvailabilityZone, &volumeName) + if err != nil { + return nil, goof.WithFieldsE(fields, + "error querying for existing volume", err) + } + if gceDisk != nil { + return nil, goof.WithFields(fields, + "volume name already exists") + } + + err = d.createVolume(ctx, &volumeName, opts) + if err != nil { + return nil, goof.WithFieldsE( + fields, "error creating volume", err) + } + + // Return the volume created + return d.VolumeInspect(ctx, volumeName, + &types.VolumeInspectOpts{ + Attachments: types.VolAttNone, + }, + ) +} + +// VolumeCreateFromSnapshot creates a new volume from an existing snapshot. +func (d *driver) VolumeCreateFromSnapshot( + ctx types.Context, + snapshotID, volumeName string, + opts *types.VolumeCreateOpts) (*types.Volume, error) { + + return nil, types.ErrNotImplemented +} + +// VolumeCopy copies an existing volume. +func (d *driver) VolumeCopy( + ctx types.Context, + volumeID, volumeName string, + opts types.Store) (*types.Volume, error) { + + return nil, types.ErrNotImplemented +} + +// VolumeSnapshot snapshots a volume. +func (d *driver) VolumeSnapshot( + ctx types.Context, + volumeID, snapshotName string, + opts types.Store) (*types.Snapshot, error) { + + return nil, types.ErrNotImplemented +} + +// VolumeRemove removes a volume. +func (d *driver) VolumeRemove( + ctx types.Context, + volumeID string, + opts *types.VolumeRemoveOpts) error { + + zone, err := d.validZone(ctx) + if err != nil { + return err + } + + if zone == nil || *zone == "" { + return goof.New("Zone is required for VolumeRemove") + } + + // TODO: check if disk is still attached first + asyncOp, err := mustSession(ctx).Disks.Delete( + *d.projectID, *zone, volumeID).Do() + if err != nil { + return goof.WithError("Failed to initiate disk deletion", err) + } + + err = d.waitUntilOperationIsFinished( + ctx, zone, asyncOp) + if err != nil { + return err + } + + return nil +} + +// VolumeAttach attaches a volume and provides a token clients can use +// to validate that device has appeared locally. 
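+// If the disk is already attached elsewhere the call fails unless opts.Force
+// is set, in which case the disk is first detached from its current users.
+// The returned token is the volume ID, which corresponds to the
+// /dev/disk/by-id/google-<name> path on the attached instance.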
+func (d *driver) VolumeAttach( + ctx types.Context, + volumeID string, + opts *types.VolumeAttachOpts) (*types.Volume, string, error) { + + zone, err := d.validZone(ctx) + if err != nil { + return nil, "", err + } + + if zone == nil || *zone == "" { + return nil, "", goof.New("Zone is required for VolumeAttach") + } + + instanceName := context.MustInstanceID(ctx).ID + gceInst, err := d.getInstance(ctx, zone, &instanceName) + if err != nil { + return nil, "", err + } + if gceInst == nil { + return nil, "", goof.New("Instance to attach to not found") + } + + // Check if volume is already attached somewhere, if so, force detach? + gceDisk, err := d.getDisk(ctx, zone, &volumeID) + if err != nil { + return nil, "", err + } + if gceDisk == nil { + return nil, "", goof.New("Volume not found") + } + + if len(gceDisk.Users) > 0 { + if !opts.Force { + return nil, "", goof.New( + "Volume already attached to different host") + } + ctx.Info("Automatically detaching volume from other instance") + err = d.detachVolume(ctx, gceDisk) + if err != nil { + return nil, "", goof.WithError( + "Error detaching volume during force attach", + err) + } + } + + err = d.attachVolume(ctx, &instanceName, zone, &volumeID) + if err != nil { + return nil, "", err + } + + vol, err := d.VolumeInspect( + ctx, volumeID, &types.VolumeInspectOpts{ + Attachments: types.VolAttReq, + Opts: opts.Opts, + }, + ) + if err != nil { + return nil, "", goof.WithError("Error getting volume", err) + } + + return vol, volumeID, nil +} + +// VolumeDetach detaches a volume. +func (d *driver) VolumeDetach( + ctx types.Context, + volumeID string, + opts *types.VolumeDetachOpts) (*types.Volume, error) { + + zone, err := d.validZone(ctx) + if err != nil { + return nil, err + } + + if zone == nil || *zone == "" { + return nil, goof.New("Zone is required for VolumeDetach") + } + + // Check if volume is attached at all + gceDisk, err := d.getDisk(ctx, zone, &volumeID) + if err != nil { + return nil, err + } + if gceDisk == nil { + return nil, goof.New("Volume not found") + } + + if len(gceDisk.Users) == 0 { + return nil, goof.New("Volume already detached") + } + + err = d.detachVolume(ctx, gceDisk) + if err != nil { + return nil, goof.WithError("Error detaching disk", err) + } + + vol, err := d.VolumeInspect( + ctx, volumeID, &types.VolumeInspectOpts{ + Attachments: types.VolAttReq, + Opts: opts.Opts, + }, + ) + if err != nil { + return nil, goof.WithError("Error getting volume", err) + } + + return vol, nil +} + +// Snapshots returns all volumes or a filtered list of snapshots. +func (d *driver) Snapshots( + ctx types.Context, + opts types.Store) ([]*types.Snapshot, error) { + + return nil, types.ErrNotImplemented +} + +// SnapshotInspect inspects a single snapshot. +func (d *driver) SnapshotInspect( + ctx types.Context, + snapshotID string, + opts types.Store) (*types.Snapshot, error) { + + return nil, types.ErrNotImplemented +} + +// SnapshotCopy copies an existing snapshot. +func (d *driver) SnapshotCopy( + ctx types.Context, + snapshotID, snapshotName, destinationID string, + opts types.Store) (*types.Snapshot, error) { + + return nil, types.ErrNotImplemented +} + +// SnapshotRemove removes a snapshot. 
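+// Snapshot and copy operations are not yet implemented by this driver; see
+// the Caveats section of the GCEPD documentation.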
+func (d *driver) SnapshotRemove( + ctx types.Context, + snapshotID string, + opts types.Store) error { + + return types.ErrNotImplemented +} + +/////////////////////////////////////////////////////////////////////// +///////// HELPER FUNCTIONS SPECIFIC TO PROVIDER ///////// +/////////////////////////////////////////////////////////////////////// + +func (d *driver) getKeyFileJSON() ([]byte, error) { + serviceAccountJSON, err := ioutil.ReadFile(d.keyFile) + if err != nil { + log.Errorf( + "Could not read credentials file: %s, %s", + d.keyFile, err) + return nil, err + } + return serviceAccountJSON, nil +} + +type keyData struct { + ProjectID string `json:"project_id"` +} + +func (d *driver) extractProjectID() (*string, error) { + keyJSON, err := d.getKeyFileJSON() + if err != nil { + return nil, err + } + + data := keyData{} + + err = json.Unmarshal(keyJSON, &data) + if err != nil { + return nil, err + } + + return &data.ProjectID, nil +} + +func getClientProjectID(ctx types.Context) (*string, bool) { + if iid, ok := context.InstanceID(ctx); ok { + if v, ok := iid.Fields[gcepd.InstanceIDFieldProjectID]; ok { + return &v, v != "" + } + } + return nil, false +} + +func getClientZone(ctx types.Context) (*string, bool) { + if iid, ok := context.InstanceID(ctx); ok { + if v, ok := iid.Fields[gcepd.InstanceIDFieldZone]; ok { + return &v, v != "" + } + } + return nil, false + +} + +func mustSession(ctx types.Context) *compute.Service { + return context.MustSession(ctx).(*compute.Service) +} + +func (d *driver) getDisks( + ctx types.Context, + zone *string) ([]*compute.Disk, error) { + + diskListQ := mustSession(ctx).Disks.List(*d.projectID, *zone) + if d.tag != "" { + filter := fmt.Sprintf("labels.%s eq %s", tagKey, d.tag) + ctx.Debugf("query filter: %s", filter) + diskListQ.Filter(filter) + } + + diskList, err := diskListQ.Do() + if err != nil { + ctx.Errorf("Error listing disks: %s", err) + return nil, err + } + + return diskList.Items, nil +} + +func (d *driver) getAggregatedDisks( + ctx types.Context) ([]*compute.Disk, error) { + + aggListQ := mustSession(ctx).Disks.AggregatedList(*d.projectID) + if d.tag != "" { + filter := fmt.Sprintf("labels.%s eq %s", tagKey, d.tag) + ctx.Debugf("query filter: %s", filter) + aggListQ.Filter(filter) + } + + aggList, err := aggListQ.Do() + if err != nil { + ctx.Errorf("Error listing aggregated disks: %s", err) + return nil, err + } + + disks := []*compute.Disk{} + + for _, diskList := range aggList.Items { + if diskList.Disks != nil && len(diskList.Disks) > 0 { + disks = append(disks, diskList.Disks...) 
+ } + } + + return disks, nil +} + +func (d *driver) getDisk( + ctx types.Context, + zone *string, + name *string) (*compute.Disk, error) { + + disk, err := mustSession(ctx).Disks.Get(*d.projectID, *zone, *name).Do() + if err != nil { + if apiE, ok := err.(*googleapi.Error); ok { + if apiE.Code == 404 { + return nil, nil + } + } + ctx.Errorf("Error getting disk: %s", err) + return nil, err + } + + return disk, nil +} + +func (d *driver) getInstance( + ctx types.Context, + zone *string, + name *string) (*compute.Instance, error) { + + inst, err := mustSession(ctx).Instances.Get(*d.projectID, *zone, *name).Do() + if err != nil { + if apiE, ok := err.(*googleapi.Error); ok { + if apiE.Code == 404 { + return nil, nil + } + } + ctx.Errorf("Error getting instance: %s", err) + return nil, err + } + + return inst, nil +} + +func (d *driver) toTypeVolume( + ctx types.Context, + disks []*compute.Disk, + attachments types.VolumeAttachmentsTypes, + zone *string) ([]*types.Volume, error) { + + var ( + ld *types.LocalDevices + ldOK bool + ) + + if attachments.Devices() { + // Get local devices map from context + // Check for presence because this is required by the API, even + // though we don't actually need this data + if ld, ldOK = context.LocalDevices(ctx); !ldOK { + return nil, goof.New( + "error getting local devices from context") + } + } + + lsVolumes := make([]*types.Volume, len(disks)) + + for i, disk := range disks { + volume := &types.Volume{ + Name: disk.Name, + ID: disk.Name, + AvailabilityZone: utils.GetIndex(disk.Zone), + Status: disk.Status, + Type: utils.GetIndex(disk.Type), + Size: disk.SizeGb, + } + + if attachments.Requested() { + attachment := getAttachment(disk, attachments, ld) + if attachment != nil { + volume.Attachments = attachment + } + + } + + lsVolumes[i] = volume + } + + return lsVolumes, nil +} + +func (d *driver) validZone(ctx types.Context) (*string, error) { + // Is there a zone in the IID header? + zone, ok := getClientZone(ctx) + if ok { + // Since there is a zone in the IID header, we only allow access + // to volumes from that zone. If driver has restricted + // access to a specific zone, client zone must match + if d.zone != "" && *zone != d.zone { + return nil, goof.New("No access to given zone") + } + return zone, nil + } + // No zone in the header, so access depends on how the driver + // is configured + if d.zone != "" { + return &d.zone, nil + } + return nil, nil +} + +func getAttachment( + disk *compute.Disk, + attachments types.VolumeAttachmentsTypes, + ld *types.LocalDevices) []*types.VolumeAttachment { + + var volAttachments []*types.VolumeAttachment + + for _, link := range disk.Users { + att := &types.VolumeAttachment{ + VolumeID: disk.Name, + InstanceID: &types.InstanceID{ + ID: utils.GetIndex(link), + Driver: gcepd.Name, + }, + } + if attachments.Devices() { + if dev, ok := ld.DeviceMap[disk.Name]; ok { + att.DeviceName = dev + // TODO: Do we need to enforce that the zone + // found in link matches the zone for the volume? 
+ } + } + volAttachments = append(volAttachments, att) + } + return volAttachments +} + +func (d *driver) createVolume( + ctx types.Context, + volumeName *string, + opts *types.VolumeCreateOpts) error { + + if opts.Size == nil || *opts.Size < minDiskSizeGB { + return goof.Newf("Minimum disk size is %d GB", minDiskSizeGB) + } + + diskType := d.defaultDiskType + if opts.Type != nil && *opts.Type != "" { + if strings.EqualFold(gcepd.DiskTypeSSD, *opts.Type) { + diskType = gcepd.DiskTypeSSD + } else if strings.EqualFold(gcepd.DiskTypeStandard, *opts.Type) { + diskType = gcepd.DiskTypeStandard + } + } + diskTypeURI := fmt.Sprintf("zones/%s/diskTypes/%s", + *opts.AvailabilityZone, diskType) + + createDisk := &compute.Disk{ + Name: *volumeName, + SizeGb: *opts.Size, + Type: diskTypeURI, + } + + asyncOp, err := mustSession(ctx).Disks.Insert( + *d.projectID, *opts.AvailabilityZone, createDisk).Do() + if err != nil { + return goof.WithError("Failed to initiate disk creation", err) + } + + err = d.waitUntilOperationIsFinished( + ctx, opts.AvailabilityZone, asyncOp) + if err != nil { + return err + } + + if d.tag != "" { + /* In order to set the labels on a disk, we have to query the + disk first in order to get the generated label fingerprint + */ + disk, err := d.getDisk(ctx, opts.AvailabilityZone, volumeName) + if err != nil { + ctx.WithError(err).Warn( + "Unable to query disk for labeling") + return nil + } + labels := getLabels(&d.tag) + _, err = mustSession(ctx).Disks.SetLabels( + *d.projectID, *opts.AvailabilityZone, *volumeName, + &compute.ZoneSetLabelsRequest{ + Labels: labels, + LabelFingerprint: disk.LabelFingerprint, + }).Do() + if err != nil { + ctx.WithError(err).Warn("Unable to label disk") + } + } + + return nil +} + +func (d *driver) waitUntilOperationIsFinished( + ctx types.Context, + zone *string, + operation *compute.Operation) error { + + opName := operation.Name +OpLoop: + for { + time.Sleep(100 * time.Millisecond) + op, err := mustSession(ctx).ZoneOperations.Get( + *d.projectID, *zone, opName).Do() + if err != nil { + return err + } + + switch op.Status { + case "PENDING", "RUNNING": + continue + case "DONE": + if op.Error != nil { + bytea, _ := op.Error.MarshalJSON() + return goof.New(string(bytea)) + } + break OpLoop + default: + return goof.Newf("Unknown status %q: %+v", + op.Status, op) + } + } + return nil +} + +func (d *driver) attachVolume( + ctx types.Context, + instanceID *string, + zone *string, + volumeName *string) error { + + disk := &compute.AttachedDisk{ + AutoDelete: false, + Boot: false, + Source: fmt.Sprintf("zones/%s/disks/%s", *zone, *volumeName), + DeviceName: *volumeName, + } + + asyncOp, err := mustSession(ctx).Instances.AttachDisk( + *d.projectID, *zone, *instanceID, disk).Do() + if err != nil { + return err + } + + err = d.waitUntilOperationIsFinished(ctx, zone, asyncOp) + if err != nil { + return err + } + + return nil +} + +func getBlockDevice(volumeID *string) string { + return fmt.Sprintf("/dev/disk/by-id/google-%s", *volumeID) +} + +func (d *driver) getAttachedDeviceName( + ctx types.Context, + zone *string, + instanceName *string, + diskLink *string) (string, error) { + + gceInstance, err := d.getInstance(ctx, zone, instanceName) + if err != nil { + return "", err + } + + for _, disk := range gceInstance.Disks { + if disk.Source == *diskLink { + return disk.DeviceName, nil + } + } + + return "", goof.New("Unable to find attached volume on instance") +} + +func (d *driver) detachVolume( + ctx types.Context, + gceDisk *compute.Disk) error { + + var 
ops = make([]*compute.Operation, 0) + var asyncErr error + + zone := utils.GetIndex(gceDisk.Zone) + + for _, user := range gceDisk.Users { + instanceName := utils.GetIndex(user) + devName, err := d.getAttachedDeviceName(ctx, &zone, &instanceName, + &gceDisk.SelfLink) + if err != nil { + return goof.WithError( + "Unable to get device name from instance", err) + } + asyncOp, err := mustSession(ctx).Instances.DetachDisk( + *d.projectID, zone, instanceName, devName).Do() + if err != nil { + asyncErr = goof.WithError("Error detaching disk", err) + continue + } + ops = append(ops, asyncOp) + } + + if len(ops) > 0 { + for _, op := range ops { + err := d.waitUntilOperationIsFinished(ctx, + &zone, op) + if err != nil { + return err + } + } + } + + return asyncErr +} + +func getLabels(tag *string) map[string]string { + labels := map[string]string{ + tagKey: *tag, + } + + return labels +} diff --git a/drivers/storage/gcepd/tests/README.md b/drivers/storage/gcepd/tests/README.md new file mode 100644 index 00000000..34aab795 --- /dev/null +++ b/drivers/storage/gcepd/tests/README.md @@ -0,0 +1,113 @@ +# GCE Persistent Disk Driver Testing +This package includes two different kinds of tests for the GCE Persistent Disk +storage driver: + +Test Type | Description +----------|------------ +Unit/Integration | The unit/integration tests are provided and executed via the standard Go test pattern, in a file named `gce_test.go`. These tests are designed to test the storage driver's and executor's functions at a low-level, ensuring, given the proper input, the expected output is received. +Test Execution Plan | The test execution plan operates above the code-level, using a Vagrantfile to deploy a complete implementation of the GCE storage driver in order to run real-world, end-to-end test scenarios. + +## Unit/Integration Tests +The unit/integration tests must be executed on a node that is hosted within GCE. +In order to execute the tests either compile the test binary locally or +on the instance. From the root of the libStorage project execute the following: + +```bash +GOOS=linux make test-gcepd +``` + +Once the test binary is compiled, if it was built locally, copy it to the GCE +instance. You will also need to copy the JSON file with your service account +credentials. + +Using an SSH session to connect to the GCE instance, please export the required +GCE credentials used by the GCE storage driver: + +```bash +export GCE_KEYFILE=/etc/gcekey.json +``` + +The tests may now be executed with the following command: + +```bash +sudo ./gcepd.test +``` + +An exit code of `0` means the tests completed successfully. If there are errors +then it may be useful to run the tests once more with increased logging: + +```bash +sudo LIBSTORAGE_LOGGING_LEVEL=debug ./gcepd.test -test.v +``` + +## Test Execution Plan +In addition to the low-level unit/integration tests, the GCE storage driver +provides a test execution plan automated with Vagrant: + +``` +vagrant up --provider=google --no-parallel +``` + +The above command brings up a Vagrant environment using GCE instances in order +to test the GCE driver. If the command completes successfully then the +environment was brought online without issue and indicates that the test +execution plan succeeded as well. + +The *--no-parallel* flag is important, as the tests are written such that tests +on one node are supposed to run and finished before the next set of tests. 
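+
+As a quick illustration, a typical invocation might look like the following.
+The project ID, service account email, and key file path shown here are
+placeholders; the variables themselves are described under `Test Plan
+Settings` below.
+
+```bash
+# Required settings -- replace the example values with your own
+export GCE_PROJECT_ID=my-gce-project
+export GCE_CLIENT_EMAIL=rexray-tester@my-gce-project.iam.gserviceaccount.com
+export GCE_JSON_KEY=$HOME/gce/rexray-tester.json
+
+vagrant up --provider=google --no-parallel
+```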
+ +The following sections outline dependencies, settings, and different execution +scenarios that may be required or useful for using the Vagrantfile. + +### Test Plan Dependencies +The following dependencies are required in order to execute the included test +execution plan: + + * [Vagrant](https://www.vagrantup.com/) 1.8.4+ + * [vagrant-google](https://github.com/mitchellh/vagrant-google) + +Once Vagrant is installed the required plug-ins may be installed with the +following commands: + +```bash +vagrant plugin install vagrant-google +``` + +### Test Plan Settings +The following environment variables may be used to configure the `Vagrantfile`. + +Environment Variable | Description | Required | Default +---------------------|-------------|:--------:|-------- +`GCE_PROJECT_ID` | The GCE Project ID | ✓ | +`GCE_CLIENT_EMAIL` | The email address of the service account holder | ✓ | +`GCE_JSON_KEY` | The location of the GCE credentials file on your local machine | ✓ | +`GCE_MACHINE_TYPE` | The GCE machine type to use | | n1-standard-1 +`GCE_IMAGE` | The GCE disk image to boot from | | centos-7-v20170110 +`GCE_ZONE` | The GCE zone to launch instance within | | us-west1-b +`REMOTE_USER` | The account name to SSH to the GCE node as | | *The local user* (.e.g. `whoami`) +`REMOTE_SSH_KEY` | The location of the private SSH key to use for SSH into the GCE node | | ~/.ssh/id_rsa + +### Test Plan Nodes +The `Vagrantfile` deploys two GCE/rexray clients with Docker named: + + * libstorage-gce-test0 + * libstorage-gce-test1 + + +### Test Plan Scripts +This package includes test scripts that execute the test plan: + + * `client0-tests.sh` + * `client1-tests.sh` + +The above files are copied to their respective instances and executed +as soon as the instance is online. + +### Test Plan Cleanup +Once the test plan has been executed, successfully or otherwise, it's important +to remember to clean up the GCE resources that may have been created along the +way. To do so simply execute the following command: + +```bash +vagrant destroy -f +``` diff --git a/drivers/storage/gcepd/tests/Vagrantfile b/drivers/storage/gcepd/tests/Vagrantfile new file mode 100644 index 00000000..eb33b703 --- /dev/null +++ b/drivers/storage/gcepd/tests/Vagrantfile @@ -0,0 +1,243 @@ +# -*- mode: ruby -*- +# vi: set ft=ruby : + +# GCE config +$GCE_PROJECT_ID = ENV['GCE_PROJECT_ID'] +$GCE_CLIENT_EMAIL = ENV['GCE_CLIENT_EMAIL'] +$GCE_JSON_KEY = ENV['GCE_JSON_KEY'] +$GCE_MACHINE_TYPE = ENV['GCE_MACHINE_TYPE'] ? ENV['GCE_MACHINE_TYPE'] : "n1-standard-1" +$GCE_IMAGE = ENV['GCE_IMAGE'] ? ENV['GCE_IMAGE'] : "centos-7-v20170110" +$GCE_ZONE = ENV['GCE_ZONE'] ? ENV['GCE_ZONE'] : "us-west1-b" +$REMOTE_USER = ENV['GCE_USER'] ? ENV['GCE_USER'] : ENV['USER'] +$REMOTE_SSH_KEY = ENV['GCE_USER_SSH_KEY'] ? ENV['GCE_USER_SSH_KEY'] : "~/.ssh/id_rsa" + +# Flags to control which REX-Ray to install +# one of these must be set to true +$install_latest_stable_rex = false +$install_latest_staged_rex = true +$install_rex_from_source = false + +$node0_name = "libstorage-gce-test0" +$node1_name = "libstorage-gce-test1" + +# Script to install node prerequisites +$install_prereqs = <