From a6ec18603f70f904e25d23df2c5269c9b7957f79 Mon Sep 17 00:00:00 2001
From: Travis Rhoden
Date: Tue, 13 Dec 2016 11:01:13 -0700
Subject: [PATCH] Add docs for RBD driver

---
 .docs/user-guide/config.md            |  6 +-
 .docs/user-guide/storage-providers.md | 86 +++++++++++++++++++++++++++
 2 files changed, 90 insertions(+), 2 deletions(-)

diff --git a/.docs/user-guide/config.md b/.docs/user-guide/config.md
index 5cfc97cc..a0ef289e 100644
--- a/.docs/user-guide/config.md
+++ b/.docs/user-guide/config.md
@@ -586,8 +586,9 @@ remote storage systems. Currently the following storage drivers are supported:
 [Isilon](./storage-providers.md#isilon) | isilon
 [ScaleIO](./storage-providers.md#scaleio) | scaleio
 [VirtualBox](./storage-providers.md#virtualbox) | virtualbox
-[EBS](./storage-providers.md#ebs) | ebs, ec2
-[EFS](./storage-providers.md#efs) | efs
+[EBS](./storage-providers.md#aws-ebs) | ebs, ec2
+[EFS](./storage-providers.md#aws-efs) | efs
+[RBD](./storage-providers.md#ceph-rbd) | rbd
 ..more coming|

 The `libstorage.server.libstorage.storage.driver` property can be used to
@@ -694,6 +695,7 @@ ScaleIO|Yes
 VirtualBox|Yes
 EBS|Yes
 EFS|No
+RBD|No

 #### Ignore Used Count
 By default accounting takes place during operations that are performed
diff --git a/.docs/user-guide/storage-providers.md b/.docs/user-guide/storage-providers.md
index ffefa197..be875bee 100644
--- a/.docs/user-guide/storage-providers.md
+++ b/.docs/user-guide/storage-providers.md
@@ -517,3 +517,89 @@ libstorage:
   region: us-east-1
   tag: test
 ```

## Ceph RBD
The Ceph RBD driver registers a driver named `rbd` with the `libStorage` driver
manager and is used to connect and mount RADOS Block Devices from a Ceph
cluster.
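For context, the driver automates the same create/map/mount cycle that can be driven by hand with the stock `rbd` CLI. The following is a minimal sketch, not part of the driver itself; it assumes a reachable Ceph cluster, the default `rbd` pool, and a hypothetical image name `demo`:

```shell
# Create a 1 GiB image with only the "layering" feature enabled,
# mirroring what the driver does for kernel-client compatibility
# (the image name "demo" is made up for illustration)
rbd create demo --size 1024 --image-feature layering

# Map the image through the rbd kernel module; prints a device
# path such as /dev/rbd0
dev=$(sudo rbd map rbd/demo)

# The client-side steps performed on attach and mount
sudo mkfs.ext4 "$dev"
sudo mount "$dev" /mnt

# Tear down: unmount, unmap, and delete the image
sudo umount /mnt
sudo rbd unmap "$dev"
rbd rm demo
```

These commands require a live cluster and root privileges, so treat them as a reference for what happens under the hood rather than a copy-paste script.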
### Requirements

* The `ceph` and `rbd` binary executables must be installed on the host
* The `rbd` kernel module must be installed
* A `ceph.conf` file must be present in its default location
  (`/etc/ceph/ceph.conf`)
* The Ceph `admin` key must be present in `/etc/ceph/`

### Configuration
The following is an example with all possible fields configured. For a running
example see the `Examples` section.

```yaml
rbd:
  defaultPool: rbd
```

#### Configuration Notes

* The `defaultPool` parameter is optional and defaults to `rbd`. When set, all
  volume requests that do not reference a specific pool will use the
  `defaultPool` value as the destination storage pool.

### Runtime behavior

The Ceph RBD driver only works when the client and server are on the same node.
There is no way for a centralized `libStorage` server to attach volumes to
remote clients, therefore the `libStorage` server must be running on each node
that wishes to mount RBD volumes.

The RBD driver uses the format `<pool>.<name>` for the volume ID. This allows
for the use of multiple pools by the driver. During a volume create, if the
volume ID is given as `<pool>.<name>`, a volume named *name* will be created in
the *pool* storage pool. If no pool is referenced, the `defaultPool` will be
used.

When querying volumes, the driver will return all RBD images present in all
pools in the cluster, prefixing each volume ID with the appropriate `<pool>.`
value.

All RBD images are created with the default 4MB object size and with only the
"layering" feature bit enabled, to ensure the greatest compatibility with the
kernel clients.

### Activating the Driver
To activate the Ceph RBD driver please follow the instructions for
[activating storage drivers](./config.md#storage-drivers), using `rbd` as the
driver name.

### Troubleshooting

* Make sure that the `ceph` and `rbd` commands work without extra parameters
  for ID, key, and monitors. All configuration must come from `ceph.conf`.
* Check the status of the Ceph cluster with the `ceph -s` command.

### Examples

Below is a full `config.yml` that works with RBD:

```yaml
libstorage:
  server:
    services:
      rbd:
        driver: rbd
rbd:
  defaultPool: rbd
```

### Caveats

* Snapshot and copy functionality is not yet implemented
* The `libStorage` server must be running on each host in order to mount/attach
  RBD volumes
* There are not yet options for using non-admin cephx keys or for changing the
  RBD create features
* Volume pre-emption is not supported. Ceph does not provide a method to
  forcefully detach a volume from a remote host -- only a host can attach and
  detach volumes from itself.
* RBD advisory locks are not yet in use. A volume is returned as "unavailable"
  if it has a watcher other than the requesting client. Until advisory locks
  are in place, it may be possible for a client to attach a volume that is
  already attached to another node. Mounting and writing to such a volume
  could lead to data corruption.
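As a closing illustration, the `<pool>.<name>` volume ID convention described under Runtime behavior can be sketched in plain shell. The IDs below are hypothetical, and `rbd` stands in for the configured `defaultPool`:

```shell
# Split a volume ID of the form <pool>.<name> into its two parts
volume_id="mypool.myvol"
pool="${volume_id%%.*}"   # text before the first dot -> storage pool
name="${volume_id#*.}"    # text after the first dot  -> image name
echo "$pool/$name"        # mypool/myvol

# An ID without a dot falls back to the configured defaultPool
short_id="myvol"
case "$short_id" in
  *.*) short_pool="${short_id%%.*}" ;;
  *)   short_pool="rbd" ;;  # assumed defaultPool value
esac
echo "$short_pool/$short_id"  # rbd/myvol
```

This is only a model of the naming scheme, not the driver's actual parsing code.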