
Add basic info to README about the plugin compatibility and features #27

Merged
merged 2 commits on Mar 25, 2020
94 changes: 43 additions & 51 deletions README.md
@@ -1,87 +1,79 @@
# CSI NFS driver

## Overview

This is a repository for the [NFS](https://en.wikipedia.org/wiki/Network_File_System) [CSI](https://kubernetes-csi.github.io/docs/) driver.
Currently it implements the bare minimum of the [CSI spec](https://github.com/container-storage-interface/spec) and is in an alpha state
of development.

#### CSI Feature matrix

| **nfs.csi.k8s.io** | K8s version compatibility | CSI versions compatibility | Dynamic Provisioning | Resize | Snapshots | Raw Block | AccessModes              | Status |
|--------------------|---------------------------|----------------------------|----------------------|--------|-----------|-----------|--------------------------|--------|
| master             | 1.14+                     | v1.0+                      | no                   | no     | no        | no        | Read/Write Multiple Pods | Alpha  |
| v2.0.0             | 1.14+                     | v1.0+                      | no                   | no     | no        | no        | Read/Write Multiple Pods | Alpha  |
| v1.0.0             | 1.9 - 1.15                | v1.0                       | no                   | no     | no        | no        | Read/Write Multiple Pods | [deprecated](https://github.com/kubernetes-csi/drivers/tree/master/pkg/nfs) |

Collaborator:
why is the max 1.15?

@wozniakjan (Member Author), Mar 25, 2020:

## Requirements

The CSI NFS driver requires a Kubernetes cluster of version 1.14 or newer and
a preexisting NFS server, whether it is deployed on the cluster or provisioned
independently. The plugin itself provides only a communication layer between
resources in the cluster and the NFS server.

## Example

There are multiple ways to create a Kubernetes cluster; the NFS CSI plugin
should work regardless of your cluster setup. A very simple way to get a local
environment for testing is, for example, [kind](https://github.com/kubernetes-sigs/kind).

There are also multiple NFS servers you can use for testing the plugin;
the major versions of the protocol (v2, v3, and v4) should be supported by
the current implementation.

The example assumes you have your cluster created (e.g. `kind create cluster`)
and a working NFS server (e.g. https://github.com/rootfs/nfs-ganesha-docker).
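
As a concrete illustration, the whole test environment can be stood up with
two commands; the NFS server image name below is a placeholder, not something
this repository prescribes, so substitute whichever server image you built or
pulled (e.g. one built from the nfs-ganesha-docker repository above):

```
# Create a throwaway local cluster (assumes the kind binary is installed).
kind create cluster

# Run an NFS server as a privileged local container; <your-nfs-server-image>
# is a placeholder for the image you want to test against.
docker run -d --name nfs-server --privileged <your-nfs-server-image>
```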

#### Deploy

Deploy the NFS plugin along with the `CSIDriver` info.

```
kubectl -f deploy/kubernetes create
```
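
To sanity-check the deployment, you can list the plugin pods and the registered
`CSIDriver` object; the exact pod names depend on the manifests under
`deploy/kubernetes`, and the driver name is assumed to be `nfs.csi.k8s.io` as
in the feature matrix above:

```
# Pods created by the deployment manifests (names will vary).
kubectl get pods --all-namespaces | grep nfs

# The CSIDriver object registered for the plugin.
kubectl get csidriver nfs.csi.k8s.io
```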

#### Example Nginx application

The [/examples/kubernetes/nginx.yaml](/examples/kubernetes/nginx.yaml) file contains a `PersistentVolume`,
a `PersistentVolumeClaim`, and an nginx `Pod` mounting the NFS volume under `/var/www`.

You will need to update the NFS server IP and the share information under
`volumeAttributes` inside the `PersistentVolume` in the `nginx.yaml` file to
match your NFS server's public endpoint and configuration. You can also provide
additional `mountOptions`, such as the protocol version, in the
`PersistentVolume` `spec` relevant to your NFS server.
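
For orientation, the relevant part of such a `PersistentVolume` might look
roughly like the sketch below. This is an illustration assembled from the
fields described above (the driver name from the feature matrix, the `server`
and `share` attributes, an optional `mountOptions` entry), not a verbatim copy
of `nginx.yaml`; names and sizes are placeholders:

```
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs-pv              # placeholder name
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteMany         # the driver supports Read/Write on multiple pods
  mountOptions:
    - nfsvers=4.1           # optional: pin the NFS protocol version
  csi:
    driver: nfs.csi.k8s.io
    volumeHandle: nfs-pv    # any cluster-unique volume identifier
    volumeAttributes:
      server: 10.10.10.10   # your NFS server endpoint
      share: /export        # your exported share
```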

```
kubectl -f examples/kubernetes/nginx.yaml create
```
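
Once created, the claim should bind and the pod should start with the share
mounted; replace the placeholder below with whatever pod name `nginx.yaml`
actually uses:

```
kubectl get pv,pvc,pods
kubectl exec <nginx-pod-name> -- ls /var/www
```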

## Running Kubernetes End To End tests on an NFS Driver

First, stand up a local cluster `ALLOW_PRIVILEGED=1 hack/local-up-cluster.sh` (from your Kubernetes repo).
For Fedora/RHEL clusters, the following might be required:
```
sudo chown -R $USER:$USER /var/run/kubernetes/
sudo chown -R $USER:$USER /var/lib/kubelet
sudo chcon -R -t svirt_sandbox_file_t /var/lib/kubelet
```
If you are planning to test using your own private image, you can either install the NFS driver using your own set of YAML files, or edit the existing YAML files to use that private image.

When using the [existing set of YAML files](https://github.com/kubernetes-csi/csi-driver-nfs/tree/master/deploy/kubernetes), you would edit the [csi-attacher-nfsplugin.yaml](https://github.com/kubernetes-csi/csi-driver-nfs/blob/master/deploy/kubernetes/csi-attacher-nfsplugin.yaml#L46) and [csi-nodeplugin-nfsplugin.yaml](https://github.com/kubernetes-csi/csi-driver-nfs/blob/master/deploy/kubernetes/csi-nodeplugin-nfsplugin.yaml#L45) files to include your private image instead of the default one. After editing these files, skip to step 3 of the following steps.

If you already have a driver installed, skip to step 4 of the following steps.

1) Build the nfs driver by running `make`
2) Create the NFS driver image, where the image tag is whatever is required by your YAML deployment files: `docker build -t quay.io/k8scsi/nfsplugin:v2.0.0 .`
3) Install the Driver: `kubectl create -f deploy/kubernetes`
4) Build E2E test binary: `make build-tests`
5) Run E2E Tests using the following command: `./bin/tests --ginkgo.v --ginkgo.progress --kubeconfig=/var/run/kubernetes/admin.kubeconfig`
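
To iterate on a single test rather than the whole suite, the standard ginkgo
flags can be passed through as well; this assumes the test binary registers the
usual ginkgo flag set, as the `--ginkgo.v` flag above suggests:

```
# Run only the specs whose descriptions match the regex.
./bin/tests --ginkgo.v --ginkgo.focus='<regex matching a spec name>' --kubeconfig=/var/run/kubernetes/admin.kubeconfig
```
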
2 changes: 1 addition & 1 deletion deploy/kubernetes/csi-nodeplugin-nfsplugin.yaml
@@ -42,7 +42,7 @@ spec:
capabilities:
add: ["SYS_ADMIN"]
allowPrivilegeEscalation: true
- image: quay.io/k8scsi/nfsplugin:v1.0.0
+ image: quay.io/k8scsi/nfsplugin:v2.0.0
args :
- "--nodeid=$(NODE_ID)"
- "--endpoint=$(CSI_ENDPOINT)"
2 changes: 1 addition & 1 deletion pkg/nfs/nfs.go
@@ -40,7 +40,7 @@ const (
)

var (
version = "1.0.0"
version = "2.0.0"
)

func NewNFSdriver(nodeID, endpoint string) *nfsDriver {