Add docs for RBD driver
codenrhoden committed Dec 13, 2016
1 parent 73295b8 commit a6ec186
Showing 2 changed files with 90 additions and 2 deletions.
6 changes: 4 additions & 2 deletions .docs/user-guide/config.md

@@ -586,8 +586,9 @@ remote storage systems. Currently the following storage drivers are supported:
[Isilon](./storage-providers.md#isilon) | isilon
[ScaleIO](./storage-providers.md#scaleio) | scaleio
[VirtualBox](./storage-providers.md#virtualbox) | virtualbox
-[EBS](./storage-providers.md#ebs) | ebs, ec2
-[EFS](./storage-providers.md#efs) | efs
+[EBS](./storage-providers.md#aws-ebs) | ebs, ec2
+[EFS](./storage-providers.md#aws-efs) | efs
+[RBD](./storage-providers.md#ceph-rbd) | rbd
..more coming|

The `libstorage.server.libstorage.storage.driver` property can be used to

@@ -694,6 +695,7 @@ ScaleIO|Yes
VirtualBox|Yes
EBS|Yes
EFS|No
+RBD|No

#### Ignore Used Count
By default accounting takes place during operations that are performed
86 changes: 86 additions & 0 deletions .docs/user-guide/storage-providers.md

@@ -517,3 +517,89 @@ libstorage:
region: us-east-1
tag: test
```
## Ceph RBD
The Ceph RBD driver registers a driver named `rbd` with the `libStorage` driver
manager and is used to connect and mount RADOS Block Devices from a Ceph
cluster.

### Requirements

* The `ceph` and `rbd` binary executables must be installed on the host
* The `rbd` kernel module must be installed
* A `ceph.conf` file must be present in its default location
(`/etc/ceph/ceph.conf`)
* The Ceph `admin` key must be present in `/etc/ceph/`; a quick way to verify
these prerequisites is shown below
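
The following is a minimal sketch of how these prerequisites might be verified
from a shell; the keyring filename shown is the conventional default and may
differ in your environment:

```sh
# Verify the client binaries are installed
which ceph rbd

# Verify the rbd kernel module is available (requires root to load)
modprobe rbd && lsmod | grep rbd

# Verify the cluster configuration and admin keyring are in place
ls /etc/ceph/ceph.conf /etc/ceph/ceph.client.admin.keyring
```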

### Configuration
The following is an example with all possible fields configured. For a running
example, see the `Examples` section.

```yaml
rbd:
  defaultPool: rbd
```

#### Configuration Notes

* The `defaultPool` parameter is optional, and defaults to "rbd". When set, all
volume requests that do not reference a specific pool will use the
`defaultPool` value as the destination storage pool (see the example below).
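
For instance, to send unqualified volume requests to a different pool (the pool
name `ssd-pool` below is purely illustrative), one might set:

```yaml
rbd:
  defaultPool: ssd-pool
```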

### Runtime behavior

The Ceph RBD driver only works when the client and server are on the same node.
There is no way for a centralized `libStorage` server to attach volumes to
clients, so the `libStorage` server must be running on each node that wishes to
mount RBD volumes.

The RBD driver uses the format of `<pool>.<name>` for the volume ID. This allows
for the use of multiple pools by the driver. During a volume create, if the
volume ID is given as `<pool>.<name>`, a volume named *name* will be created in
the *pool* storage pool. If no pool is referenced, the `defaultPool` will be
used.
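
As a rough illustration of this mapping, a volume with the ID `mypool.data01`
(hypothetical names) corresponds to the RBD image `data01` in the `mypool`
storage pool, which can be inspected directly with the standard `rbd` tooling:

```sh
# List images in the pool the driver would target
rbd ls mypool

# Show details of the image backing the volume ID "mypool.data01"
rbd info mypool/data01
```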

When querying volumes, the driver will return all RBDs present in all pools in
the cluster, prefixing each volume with the appropriate `<pool>.` value.

All RBD creates use the default 4MB object size and the "layering" feature bit,
ensuring the greatest compatibility with the kernel clients.

### Activating the Driver
To activate the Ceph RBD driver, please follow the instructions for
[activating storage drivers](./config.md#storage-drivers), using `rbd` as the
driver name.

### Troubleshooting

* Make sure that the `ceph` and `rbd` commands work without extra parameters
for the ID, key, and monitors. All configuration must come from `ceph.conf`.
* Check the status of the Ceph cluster with the `ceph -s` command (see the
example below).
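
A minimal sanity check might look like the following; both commands should
succeed without any additional `--id`, `--keyring`, or monitor options:

```sh
# Is the cluster reachable and healthy?
ceph -s

# Can images be listed in the default pool without extra credentials?
rbd ls
```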

### Examples

Below is a full `config.yml` that works with RBD

```yaml
libstorage:
  server:
    services:
      rbd:
        driver: rbd
        rbd:
          defaultPool: rbd
```

### Caveats

* Snapshot and copy functionality is not yet implemented
* The libStorage server must be running on each host in order to mount/attach
RBD volumes
* There are not yet options for using non-admin cephx keys or for changing the
RBD create features
* Volume pre-emption is not supported. Ceph does not provide a method to
forcefully detach a volume from a remote host -- only a host can attach and
detach volumes from itself.
* RBD advisory locks are not yet in use. A volume is returned as "unavailable"
if it has a watcher other than the requesting client. Until advisory locks are
in place, it may be possible for a client to attach a volume that is already
attached to another node. Mounting and writing to such a volume could lead to
data corruption; a manual way to check for watchers is shown below.
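
As a rough manual check (assuming a Ceph release recent enough to provide the
`rbd status` subcommand, and using the hypothetical image `mypool/data01`), the
watchers on an image can be listed before attaching it elsewhere:

```sh
# List the current watchers of the image; an empty list suggests the image is
# not mapped anywhere else
rbd status mypool/data01
```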
