From a970147c9c6b0fcbb6e78fab58a4512104c6a110 Mon Sep 17 00:00:00 2001 From: Tiago Castro Date: Tue, 17 Dec 2024 01:24:15 +0000 Subject: [PATCH] docs: add build and test docs for all mayastor repos Signed-off-by: Tiago Castro --- doc/build-all.md | 48 +++--- doc/build.md | 6 - doc/contributor.md | 326 +++++++++++++++++++++++++++++++++++++++++ doc/lvm.md | 21 +-- doc/run.md | 28 ++-- doc/test-controller.md | 184 +++++++++++++++++++++++ doc/test-extensions.md | 157 ++++++++++++++++++++ doc/test.md | 8 +- 8 files changed, 722 insertions(+), 56 deletions(-) create mode 100644 doc/contributor.md create mode 100644 doc/test-controller.md create mode 100644 doc/test-extensions.md diff --git a/doc/build-all.md b/doc/build-all.md index fe3912f96..e2242a0c8 100644 --- a/doc/build-all.md +++ b/doc/build-all.md @@ -13,12 +13,12 @@ you won't need to worry about cross compiler toolchains, and all builds are repr ## Table of Contents - [Prerequisites](#prerequisites) - - [Build system](#build-system) - - [Source Code](#source-code) + - [Build system](#build-system) + - [Source Code](#source-code) - [Building and Pushing](#building-and-pushing) - - [Building](#building) - - [Pushing](#pushing) - - [Installing](#installing) + - [Building](#building) + - [Pushing](#pushing) + - [Installing](#installing) ## Prerequisites @@ -53,27 +53,27 @@ Mayastor is split across different GitHub repositories under the [OpenEBS][githu Here's a breakdown of the required repos for the task at hand: - **_data-plane_**: - - The data-plane components: - - io-engine (the only one which we need for this) - - io-engine-client - - casperf + - The data-plane components: + - io-engine (the only one which we need for this) + - io-engine-client + - casperf - **_control-plane_**: - - Various control-plane components: - - agent-core - - agent-ha-cluster - - agent-ha-node - - operator-diskpool - - csi-controller - - csi-node - - api-rest + - Various control-plane components: + - agent-core + - agent-ha-cluster + - agent-ha-node + - operator-diskpool + - csi-controller + - csi-node + - api-rest - **_extensions_**: - - Mostly K8s specific components: - - kubectl-mayastor - - metrics-exporter-io-engine - - call-home - - stats-aggregator - - upgrade-job - - Also contains the helm-chart + - Mostly K8s specific components: + - kubectl-mayastor + - metrics-exporter-io-engine + - call-home + - stats-aggregator + - upgrade-job + - Also contains the helm-chart > **_NOTE_**: > There are also a few other repositories which are pulled or submoduled by the repositories above diff --git a/doc/build.md b/doc/build.md index 14a534944..1bcf17ba0 100644 --- a/doc/build.md +++ b/doc/build.md @@ -217,8 +217,6 @@ you want to run them locally: [nix-install]: https://nixos.org/download.html -[nix-develop]: https://nixos.org/manual/nix/unstable/command-ref/new-cli/nix3-develop.html - [nix-paper]: https://edolstra.github.io/pubs/nixos-jfp-final.pdf [nix-build]: https://nixos.org/manual/nix/unstable/command-ref/new-cli/nix3-build.html @@ -227,8 +225,6 @@ you want to run them locally: [nix-shell]: https://nixos.org/manual/nix/unstable/command-ref/new-cli/nix3-shell.html -[nix-channel]: https://nixos.wiki/wiki/Nix_channels - [nixos]: https://nixos.org/ [rust-lang]: https://www.rust-lang.org/ @@ -242,5 +238,3 @@ you want to run them locally: [reproducible-builds]: https://reproducible-builds.org/ [cii-best-practices]: https://www.coreinfrastructure.org/programs/best-practices-program/ - -[direnv]: https://direnv.net/ diff --git a/doc/contributor.md 
b/doc/contributor.md
new file mode 100644
index 000000000..93a3583ec
--- /dev/null
+++ b/doc/contributor.md
@@ -0,0 +1,326 @@
+# Contributing to Mayastor
+
+This guide will walk you through the process of building and testing all Mayastor components using Nix and Docker.
+
+Mayastor is a multi-component [Rust][rust-lang] project that makes heavy use of
+[Nix][nix-explore] for our development and build process.
+
+If you're coming from a non-Rust (or non-Nix) background, **building Mayastor may be a bit
+different than you're used to.** There is no `Makefile`, you won't need a build toolchain,
+you won't need to worry about cross compiler toolchains, and all builds are reproducible.
+
+## Table of Contents
+
+- [Prerequisites](#prerequisites)
+  - [Build system](#build-system)
+  - [Test system](#test-system)
+  - [Source Code](#source-code)
+- [Building binaries](#building-binaries)
+  - [Building local binaries](#building-local-binaries)
+- [Testing](#testing)
+  - [Mayastor I/O Engine (data-plane)](#mayastor-io-engine-data-plane)
+  - [Mayastor Control Plane](#mayastor-control-plane)
+  - [Mayastor Extensions](#mayastor-extensions)
+  - [CI](#ci)
+    - [Jenkins](#jenkins)
+    - [GitHub Actions](#github-actions)
+- [Deploying to K8s](#deploying-to-k8s)
+  - [Building the images](#building-the-images)
+  - [Pushing the images](#pushing-the-images)
+  - [Iterative Builds](#iterative-builds)
+  - [Installing](#installing)
+
+## Prerequisites
+
+Mayastor **only** builds on modern Linuxes. We'd adore contributions to add support for
+Windows, FreeBSD, OpenWRT, or other server platforms.
+
+If you do not have a Linux system:
+
+- **Windows:** We recommend using [WSL2][windows-wsl2] if you only need to
+  build Mayastor. You'll need a [Hyper-V VM][windows-hyperv] if you want to use it.
+- **Mac:** We recommend you use [Docker for Mac][docker-install]
+  and follow the Docker process described. Please let us know if you find a way to
+  run it!
+- **FreeBSD:** We _think_ this might actually work, SPDK is compatible! But we haven't
+  tried it yet.
+- **Others:** This is kind of a "Do-it-yourself" situation. Sorry, we can't be more help!
+
+### Build system
+
+The only thing your system needs to build Mayastor is [**Nix**][nix-install].
+
+Usually [Nix][nix-install] can be installed via (Do **not** use `sudo`!):
+
+```bash
+curl -L https://nixos.org/nix/install | sh
+```
+
+### Test system
+
+Running the test suites may require additional OS configuration beyond the build setup (for example, a running
+docker service); see the per-repo guides linked from the [Testing](#testing) section for the specifics.
+
+### Source Code
+
+Mayastor is split across different GitHub repositories under the [OpenEBS][github-openebs] organization.
+
+Here's a breakdown of the required repos for the task at hand:
+
+- **_data-plane_**:
+  - The data-plane components:
+    - io-engine (the only one which we need for this)
+    - io-engine-client
+    - casperf
+- **_control-plane_**:
+  - Various control-plane components:
+    - agent-core
+    - agent-ha-cluster
+    - agent-ha-node
+    - operator-diskpool
+    - csi-controller
+    - csi-node
+    - api-rest
+- **_extensions_**:
+  - Mostly K8s specific components:
+    - kubectl-mayastor
+    - metrics-exporter-io-engine
+    - call-home
+    - stats-aggregator
+    - upgrade-job
+  - Also contains the helm-chart
+
+> **_NOTE_**:
+> There are also a few other repositories which are pulled or submoduled by the repositories above
+
+If you want to tinker with all repos, here's how you can check them all out:
+
+```bash
+mkdir ~/mayastor && cd ~/mayastor
+git clone --recurse-submodules https://github.com/openebs/mayastor.git -- io-engine
+git clone --recurse-submodules https://github.com/openebs/mayastor-control-plane.git -- controller
+git clone --recurse-submodules https://github.com/openebs/mayastor-extensions.git -- extensions
+```
+
+## Building binaries
+
+### Building local binaries
+
+Each code repository contains its own [`nix-shell`][nix-shell] environment which provides all prerequisite build dependencies.
+
+> **NOTE**
+> To run the tests, you might need additional OS configuration, for example: a docker service.
+
+```bash
+cd ~/mayastor/controller
+nix-shell
+```
+
+Once entered, you can start any tooling (e.g. `code .`) to ensure the correct resources are available.
+The project can then be interacted with like any other Rust project.
+
+Building:
+
+```bash
+cargo build --bins
+```
+
+## Testing
+
+There are a few different types of tests used in Mayastor:
+
+- Unit Tests
+- Component Tests
+- BDD Tests
+- E2E Tests
+- Load Tests
+- Performance Tests
+
+Each repo may have a subset of the types defined above.
+
+### Mayastor I/O Engine (data-plane)
+
+Find the guide [here](./test.md).
+
+### Mayastor Control Plane
+
+Find the guide [here](./test-controller.md).
+
+### Mayastor Extensions
+
+Find the guide [here](./test-extensions.md).
+
+### CI
+
+Each repo has its own CI system which is currently a mix of [Jenkins][jenkins] and [GitHub Actions][github-actions].
+At its core, each pipeline runs the Unit/Integration tests, the BDD tests and image-build tests, ensuring that a set of images can be built once a PR is merged to the target branch.
+
+#### Jenkins
+
+For the Jenkins pipeline you can refer to the `./Jenkinsfile` on each repo.
+The Jenkins systems are currently set up on the DataCore-sponsored hardware and need to be reinstalled to CNCF-sponsored hardware or perhaps even completely moved to GitHub Actions.
+
+> _**CI**_\
+> Let us know if you'd like to help with this effort
+
+#### GitHub Actions
+
+For the GitHub Actions you can refer to the `./.github/workflows` on each repo.
+
+## Deploying to K8s
+
+When you're mostly done with a set of changes, you'll want to test them in a K8s cluster, and for this you need to build docker images.
+Each of the repos contains a script for building and pushing all their respective container images.
+Usually this is located at `./scripts/release.sh`.
+The API for this script is generally the same across repos, as it leverages a common [base script][deps-base-release.sh].
+
+### Building the images
+
+```bash
+> ./scripts/release.sh --help
+Usage: release.sh [OPTIONS]
+
+  -d, --dry-run              Output actions that would be taken, but don't run them.
+  -h, --help                 Display this text.
+  --registry                 Push the built images to the provided registry.
+                             To also replace the image org provide the full repository path, example: docker.io/org
+  --debug                    Build debug version of images where possible.
+  --skip-build               Don't perform nix-build.
+  --skip-publish             Don't publish built images.
+  --image                    Specify what image to build and/or upload.
+  --tar                      Decompress and load images as tar rather than tar.gz.
+  --skip-images              Don't build nor upload any images.
+  --alias-tag                Explicit alias for short commit hash tag.
+  --tag                      Explicit tag (overrides the git tag).
+  --incremental              Builds components in two stages allowing for faster rebuilds during development.
+  --build-bins               Builds all the static binaries.
+  --no-static-linking        Don't build the binaries with static linking.
+  --build-bin                Specify which binary to build.
+  --skip-bins                Don't build the static binaries.
+  --build-binary-out         Specify the outlink path for the binaries (otherwise it's the current directory).
+  --skopeo-copy              Don't load containers into host, simply copy them to registry with skopeo.
+  --skip-cargo-deps          Don't prefetch the cargo build dependencies.
+
+Environment Variables:
+  RUSTFLAGS                  Set Rust compiler options when building binaries.
+
+Examples:
+  release.sh --registry 127.0.0.1:5000
+```
+
+If you want to see what happens under the hood, without building, you can use the `--dry-run` flag.
+
+```bash
+cd ~/mayastor/controller
+./scripts/release.sh --dry-run --alias-tag my-tag
+```
+
+Here's a snippet of what you'd actually see:
+
+```text
+~/mayastor/controller ~/mayastor
+nix-build --argstr img_tag my-tag --no-out-link -A control-plane.project-builder.cargoDeps
+Cargo vendored dependencies pre-fetched after 1 attempt(s)
+Building openebs/mayastor-agent-core:my-tag ...
+nix-build --argstr img_tag my-tag --out-link agents.core-image -A images.release.agents.core --arg allInOne true --arg incremental false --argstr product_prefix --argstr rustFlags
+docker load -i agents.core-image
+rm agents.core-image
+Building openebs/mayastor-agent-ha-node:my-tag ...
+nix-build --argstr img_tag my-tag --out-link agents.ha.node-image -A images.release.agents.ha.node --arg allInOne true --arg incremental false --argstr product_prefix --argstr rustFlags
+docker load -i agents.ha.node-image
+rm agents.ha.node-image
+Building openebs/mayastor-agent-ha-cluster:my-tag ...
+nix-build --argstr img_tag my-tag --out-link agents.ha.cluster-image -A images.release.agents.ha.cluster --arg allInOne true --arg incremental false --argstr product_prefix --argstr rustFlags
+docker load -i agents.ha.cluster-image
+```
+
+If you want to build, but not push it anywhere, you can skip the publishing with `--skip-publish`.
+
+> **_NOTE_**: For repos with static binaries, you can avoid building them with `--skip-bins`.
+
+```bash
+cd ~/mayastor/controller
+./scripts/release.sh --skip-publish --alias-tag my-tag
+```
+
+> _**NOTE**_:
+> Take a look [here](./build-all.md) for the guide on building and pushing all the images
+
+### Pushing the images
+
+You can push the images to your required registry/namespace using the argument `--registry`.\
+For the purposes of this guide, we'll push to my docker.io namespace: `docker.io/tiagolobocastro`.
+
+```bash
+cd ~/mayastor/controller
+./scripts/release.sh --registry docker.io/tiagolobocastro --alias-tag my-tag
+```
+
+> _**NOTE**_:
+> If you don't specify the namespace, the default openebs namespace is kept.
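+
+If your target is a local or private registry, you can also skip loading the images into the host docker
+daemon and copy them across directly with skopeo. A minimal sketch, assuming a registry reachable at
+`127.0.0.1:5000` (the address used in the help text's example):
+
+```bash
+# Build and copy the images straight to the registry with skopeo,
+# without docker-loading them on the host (see --skopeo-copy above).
+cd ~/mayastor/controller
+./scripts/release.sh --registry 127.0.0.1:5000 --skopeo-copy --alias-tag my-tag
+```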
+
+### Iterative Builds
+
+The default image build process attempts to build all images that are part of a single repo in one shot, thus reducing the build time.
+If you're iterating over code changes on a single image, you may wish to enable the incremental build flag (`--incremental`), which avoids rebuilding the dependencies over and over again.
+
+```bash
+cd ~/mayastor/controller
+./scripts/release.sh --registry docker.io/tiagolobocastro --alias-tag my-tag --image csi.controller --incremental
+```
+
+### Installing
+
+Installing the full helm chart with the custom images is quite simple.
+
+> _**NOTE**_:
+> One last step is required, mostly due to a bug or unexpected behaviour with the helm chart. \
+> We'll need to manually push this container image:
+>
+>```bash
+>docker pull docker.io/openebs/alpine-sh:4.1.0
+>docker tag docker.io/openebs/alpine-sh:4.1.0 docker.io/tiagolobocastro/alpine-sh:4.1.0
+>docker push docker.io/tiagolobocastro/alpine-sh:4.1.0
+>```
+
+```bash
+> helm install mayastor chart -n mayastor --create-namespace --set="image.repo=tiagolobocastro,image.tag=my-tag" --wait
+NAME: mayastor
+LAST DEPLOYED: Fri Dec  6 15:42:16 2024
+NAMESPACE: mayastor
+STATUS: deployed
+REVISION: 1
+NOTES:
+OpenEBS Mayastor has been installed. Check its status by running:
+$ kubectl get pods -n mayastor
+
+For more information or to view the documentation, visit our website at https://openebs.io/docs/
+```
+
+If you're only building certain components, you may want to modify the images of an existing deployment, or configure per-repo tags, for example:
+
+```bash
+helm install mayastor chart -n mayastor --create-namespace --set="image.repo=tiagolobocastro,image.repoTags.control-plane=my-tag" --wait
+```
+
+> _**NOTE**_:
+> We are currently missing overrides for registry/namespace/image:tag on specific Mayastor components
+
+[rust-lang]: https://www.rust-lang.org/
+
+[nix-explore]: https://nixos.org/explore.html
+
+[nix-shell]: https://nixos.org/manual/nix/unstable/command-ref/new-cli/nix3-shell.html
+
+[windows-wsl2]: https://wiki.ubuntu.com/WSL#Ubuntu_on_WSL
+
+[windows-hyperv]: https://wiki.ubuntu.com/Hyper-V
+
+[docker-install]: https://docs.docker.com/get-docker/
+
+[nix-install]: https://nixos.org/download.html
+
+[github-openebs]: https://github.com/openebs
+
+[deps-base-release.sh]: https://github.com/openebs/mayastor-dependencies/blob/HEAD/scripts/release.sh
+
+[jenkins]: https://www.jenkins.io/
+
+[github-actions]: https://docs.github.com/en/actions
\ No newline at end of file
diff --git a/doc/lvm.md b/doc/lvm.md
index 6898816e4..879415c6c 100644
--- a/doc/lvm.md
+++ b/doc/lvm.md
@@ -10,7 +10,7 @@ and extensive features that can enhance Mayastor’s storage services.
 
 ## Motivation
 
-LVM is a mature and widely adopted storage management system in Linux environments.
+LVM is a mature and widely adopted storage management system in Linux environments.\
 While the SPDK Blobstore (LVS) has been a reliable option, integrating LVM as an alternative
 backend can appeal to a broader audience: its robustness, maturity, feature set, and
 community support make it an attractive choice for Mayastor users.
 
@@ -19,13 +19,14 @@ By integrating LVM, we can also allow users to upgrade existing non-replicated L
 
 ## Goals
 
-Alternative Backend: Enable Mayastor to use LVM volume groups as an alternative backend for storage
-pools.
-Dynamic Volume Management: Leverage LVM’s volume management features (resizing, snapshots,
-thin provisioning) within Mayastor.
-Simplicity: Abstract LVM complexities from users while providing robust storage services. +`Alternative Backend`: Enable Mayastor to use LVM volume groups as an alternative backend for storage +pools.\ +`Dynamic Volume Management`: Leverage LVM’s volume management features (resizing, snapshots, +thin provisioning) within Mayastor.\ +`Simplicity`: Abstract LVM complexities from users while providing robust storage services. ### Supporting Changes + 1. Pools Mayastor pools represent devices supplying persistent backing storage. @@ -69,15 +70,17 @@ Features - [ ] RAIDx ### Limitation + - Thin provisioning and snapshot support is not yet integrated - RAID is not yet integrated ## Conclusion -By integrating LVM with Mayastor, you can leverage the benefits of both technologies. LVM provides dynamic volume management, -while Mayastor abstracts storage complexities, allowing you to focus on your applications. +By integrating LVM with Mayastor, you can leverage the benefits of both technologies.\ +LVM provides dynamic volume management, while Mayastor abstracts storage complexities, allowing you to focus on your applications.\ Happy storage provisioning! πŸš€ +
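+As a concrete sketch of the host-side primitives an LVM-backed pool builds on (standard lvm2 commands;
+the device and volume group names here are illustrative):
+
+```bash
+# Create a physical volume and a volume group that a Mayastor
+# pool could then consume as its backing storage.
+sudo pvcreate /dev/sdb
+sudo vgcreate vg-mayastor /dev/sdb
+# Inspect the group; replica logical volumes would be carved out of it.
+sudo vgs vg-mayastor
+```
+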
```mermaid

graph TD;

subgraph Node2
/dev/sdc --> PV_3
end
-```
\ No newline at end of file
+```
diff --git a/doc/run.md b/doc/run.md
index fc9439cc8..c5237fba1 100644
--- a/doc/run.md
+++ b/doc/run.md
@@ -70,13 +70,13 @@ In order to use the full feature set of Mayastor, some or all of the following c
 
 - A Linux Kernel 5.1+ (with [`io-uring`][io_uring-intro] support)
 - The following kernel modules loaded:
-  - `nbd`: Network Block Device support
-  - `nvmet`: NVMe Target support
-  - `nvmet_rdma`: NVMe Target (rDMA) support
-  - `nvme_fabrics`: NVMe over Fabric support
-  - `nvme_tcp`: NVMe over TCP support
-  - `nvme_rdma`: NVMe (rDMA) support
-  - `nvme_loop`: NVMe Loop Device support
+    - `nbd`: Network Block Device support
+    - `nvmet`: NVMe Target support
+    - `nvmet_rdma`: NVMe Target (rDMA) support
+    - `nvme_fabrics`: NVMe over Fabric support
+    - `nvme_tcp`: NVMe over TCP support
+    - `nvme_rdma`: NVMe (rDMA) support
+    - `nvme_loop`: NVMe Loop Device support
 
   To load these on NixOS:
 
@@ -95,7 +95,7 @@ In order to use the full feature set of Mayastor, some or all of the following c
 
 - For Asymmetric Namespace Access (ANA) support (early preview), the following kernel build configuration enabled:
 
-  - `CONFIG_NVME_MULTIPATH`: enables support for multipath access to NVMe subsystems
+    - `CONFIG_NVME_MULTIPATH`: enables support for multipath access to NVMe subsystems
 
   This is usually already enabled in distribution kernels, at least for RHEL/CentOS 8.2,
   Ubuntu 20.04 LTS, and SUSE Linux Enterprise 15.2.
@@ -109,7 +109,7 @@ In order to use the full feature set of Mayastor, some or all of the following c
 
   followed by reloading the `nvme-core` module or rebooting.
 
-  To build this on NixOS:
+  On recent versions of NixOS this is already enabled by default, otherwise you may build it as such:
 
   ```nix
   # /etc/nixos/configuration.nix
@@ -283,9 +283,9 @@ Mayastor development. Here are the ones known to not work by default:
 
 - [`kind`][kind]
 
-  In order to make this one work, you need to add `/run/udev` and `/run/udev` to the kind node hostPath.
-  Once the node containers are running, you may need to remount `/sys` as rw.
-  Here is an example: https://github.com/openebs/mayastor-extensions/blob/develop/scripts/k8s/deployer.sh
+  In order to make this one work, you need to add `/run/udev` to the kind node hostPath.\
+  Once the node containers are running, you may need to remount `/sys` as rw.\
+  Here is an example: <https://github.com/openebs/mayastor-extensions/blob/develop/scripts/k8s/deployer.sh>
 
 ## Running on a real Kubernetes cluster
 
@@ -302,8 +302,6 @@ production Mayastor deployment and operation instructions.
 
 [doc-build-building-portable-nix-bundles]: ./build.md#Building-portable-Nix-bundles
 
-[doc-test]: ./test.md
-
 [io_uring-intro]: https://unixism.net/loti/what_is_io_uring.html
 
 [hugepages-lwn-one]: https://lwn.net/Articles/374424/
@@ -336,4 +334,4 @@ production Mayastor deployment and operation instructions.
 
 [libvirtd]: https://libvirt.org/index.html
 
-[terraform-readme]: ./terraform/readme.adoc
+[terraform-readme]: https://github.com/openebs/mayastor-control-plane/tree/HEAD/terraform/cluster/README.adoc
diff --git a/doc/test-controller.md b/doc/test-controller.md
new file mode 100644
index 000000000..82597f04d
--- /dev/null
+++ b/doc/test-controller.md
@@ -0,0 +1,184 @@
+# Testing Mayastor Control Plane
+
+In order to test Mayastor, you'll need to be able to [**run Mayastor**][doc-run],
+follow that guide for persistent hugepages & kernel module setup.
+
+Or, for ad-hoc:
+
+- Ensure at least 512 2MB hugepages.
+
+  ```bash
+  echo 512 | sudo tee /sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages
+  ```
+
+- Ensure several kernel modules are installed:
+
+  ```bash
+  modprobe xfs nvme_fabrics nvme_tcp nvme_rdma
+  ```
+
+- Ensure docker is installed and the service is running (OS specific)
+
+## Table of Contents
+
+- [Table of Contents](#table-of-contents)
+- [Local Docker Playground](#local-docker-playground)
+  - [Deploying](#deploying)
+- [Running the test suites](#running-the-test-suites)
+  - [Unit/Integration/Docs](#unitintegrationdocs)
+  - [BDD](#bdd)
+  - [Testing with a custom io-engine](#testing-with-a-custom-io-engine)
+- [Local K8s Playground](#local-k8s-playground)
+  - [Example](#example)
+
+## Local Docker Playground
+
+The Mayastor integration tests leverage docker in order to create a "cluster" with multiple components running as their own docker container within the same network.
+Specifically, the control-plane integration tests make use of the [deployer](https://github.com/openebs/mayastor-control-plane/blob/HEAD/deployer/README.md) which can set up these "clusters" for you, along with a very extensive range of options.
+
+### Deploying
+
+Starting a deployer "cluster" is then very simple:
+
+```console
+deployer start -s -i 2 -w 5s
+[/core] [10.1.0.3] /home/tiago/git/mayastor/controller/target/debug/core --store etcd.cluster:2379
+[/etcd] [10.1.0.2] /nix/store/7fvflmxl9a8hfznsc1sddp5az1gjlavf-etcd-3.5.13/bin/etcd --data-dir /tmp/etcd-data --advertise-client-urls http://[::]:2379 --listen-client-urls http://[::]:2379 --heartbeat-interval=1 --election-timeout=5
+[/io-engine-1] [10.1.0.5] /bin/io-engine -N io-engine-1 -g 10.1.0.5:10124 -R https://core:50051 --api-versions V1 -r /host/tmp/io-engine-1.sock --ptpl-dir /host/tmp/ptpl/io-engine-1 -p etcd.cluster:2379
+[/io-engine-2] [10.1.0.6] /bin/io-engine -N io-engine-2 -g 10.1.0.6:10124 -R https://core:50051 --api-versions V1 -r /host/tmp/io-engine-2.sock --ptpl-dir /host/tmp/ptpl/io-engine-2 -p etcd.cluster:2379
+[/rest] [10.1.0.4] /home/tiago/git/mayastor/controller/target/debug/rest --dummy-certificates --https rest:8080 --http rest:8081 --workers=1 --no-auth
+```
+
+> **NOTE**: Use `--io-engine-isolate` to give each engine a different CPU core\
+> **NOTE**: Use `--developer-delayed` to add a sleep delay on each engine, reducing CPU usage\
+> **NOTE**: For all options, check `deployer start --help`
+
+And with this we have a dual io-engine cluster which we can interact with.
+
+```console
+rest-plugin get nodes
+ ID           GRPC ENDPOINT   STATUS  VERSION
+ io-engine-2  10.1.0.6:10124  Online  v1.0.0-997-g17488f4a7da3
+ io-engine-1  10.1.0.5:10124  Online  v1.0.0-997-g17488f4a7da3
+```
+
+You can also use the swagger-ui available at [localhost:8081](http://localhost:8081/v0/swagger-ui#).
+
+At the end of your experiment, remember to bring down the cluster:
+
+```bash
+deployer stop
+```
+
+## Running the test suites
+
+> **TODO:** We're still writing this! Sorry! Let us know if you want us to prioritize this!
+
+### Unit/Integration/Docs
+
+Mayastor's unit tests, integration tests, and documentation tests are run via the conventional `cargo test`.
+
+> **An important note**: Some tests need to run as root, and so invoke sudo.
+
+> **Remember to enter the nix-shell before running any of the commands herein**
+
+All tests share a deployer "cluster" and network, and therefore they need to run one at a time.
+For example, to test the `deployer-cluster` crate:
+
+```bash
+cargo test -p deployer-cluster -- --test-threads 1 --nocapture
+```
+
+To test all crates, simply use the provided script:
+
+```bash
+./scripts/rust/test.sh
+```
+
+### BDD
+
+There is a bit of extra setup for the python virtual environment.
+
+To prepare:
+
+```bash
+tests/bdd/setup.sh
+```
+
+Then, to run the tests:
+
+```bash
+./scripts/python/test.sh
+```
+
+If you want to run the tests manually, you can also do the following:
+
+```bash
+. tests/bdd/setup.sh # source the virtual environment
+pytest tests/bdd/features/csi/node/test_parameters.py -x
+```
+
+### Testing with a custom io-engine
+
+You can test with a custom io-engine by specifying environment variables:
+
+- image
+
+  ```bash
+  unset IO_ENGINE_BIN
+  export IO_ENGINE_IMAGE=docker.io/tiagolobocastro/mayastor-io-engine:my-tag
+  ```
+
+- binary
+
+  ```bash
+  unset IO_ENGINE_IMAGE
+  export IO_ENGINE_BIN=~/mayastor/io-engine/target/debug/io-engine
+  ```
+
+## Local K8s Playground
+
+If you need a K8s cluster, we have a [terraform] deployment available [here](https://github.com/openebs/mayastor-control-plane/tree/HEAD/terraform/cluster).
+It can be used to deploy K8s on [libvirt] and [lxd].
+
+> [!Warning]
+> Please note that deployment on [lxd] is very experimental at the moment.\
+> See for example:
+>
+
+> **TODO:** We're still writing this! Sorry! Let us know if you want us to prioritize this!\
+> In the meantime, refer to the [README](https://github.com/openebs/mayastor-control-plane/tree/HEAD/terraform/cluster/README.adoc) for more help
+
+### Example
+
+```console
+❯ terraform apply --var="worker_vcpu=4" --var="worker_memory=8192" --var="worker_nodes=3" --auto-approve
+...
+Apply complete! Resources: 25 added, 0 changed, 0 destroyed.
+
+Outputs:
+
+kluster = ...
+```
+
+[terraform]: https://www.terraform.io/
+
+[libvirt]: https://libvirt.org/
+
+[lxd]: https://canonical.com/lxd
+
+[doc-run]: ./run.md
diff --git a/doc/test-extensions.md b/doc/test-extensions.md
new file mode 100644
--- /dev/null
+++ b/doc/test-extensions.md
@@ -0,0 +1,157 @@
+# Testing Mayastor Extensions
+
+## Kind
+
+> [!Warning] _**Limitation**_\
+> Kind deploys K8s nodes as docker containers on the same host, and thus they all share the host's kernel.\
+> Currently this means the HA feature becomes a little confusing, as multiple nodes may start reporting path failures.
+
+A [helper script](https://github.com/openebs/mayastor-extensions/blob/HEAD/scripts/k8s/deployer.sh) is provided to make deploying these clusters, pre-configured for Mayastor, even easier.
+
+> [!Warning] Kernel Modules\
+> This script will attempt to install kernel modules
+
+### Deploying
+
+Starting a kind cluster is then very simple:
+
+```console
+❯ ./scripts/k8s/deployer.sh start --workers 2 --label --disk 1G
+Current hugepages (4096) are sufficient
+nvme-tcp kernel module already installed
+NVMe multipath support IS enabled
+Creating cluster "kind" ...
+ ✓ Ensuring node image (kindest/node:v1.30.0) 🖼
+ ✓ Preparing nodes 📦 📦 📦
+ ✓ Writing configuration 📜
+ ✓ Starting control-plane 🕹
+ ✓ Installing CNI 🔌
+ ✓ Installing StorageClass 💾
+ ✓ Joining worker nodes 🚜
+Set kubectl context to "kind-kind"
+You can now use your cluster with:
+
+kubectl cluster-info --context kind-kind
+
+Thanks for using kind! 😊
+Kubernetes control plane is running at https://127.0.0.1:45493
+CoreDNS is running at https://127.0.0.1:45493/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
+
+To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
+HostIP: "172.18.0.1"
+```
+
+> **NOTE**:\
+> Use `--disk` to specify the size of the fallocated file on which pools can be created.\
+> Each disk is mounted on `/var/local/mayastor/io-engine/disk.io` on each worker.
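+
+Once the helm chart is installed (shown below), those disk files can back Mayastor pools. A hedged sketch of
+creating a pool from one of them (the `DiskPool` apiVersion may differ on your chart version):
+
+```bash
+# Create a DiskPool on one of the kind workers from the fallocated disk file.
+kubectl apply -f - <<EOF
+apiVersion: openebs.io/v1beta2
+kind: DiskPool
+metadata:
+  name: pool-kind-worker
+  namespace: mayastor
+spec:
+  node: kind-worker
+  disks: ["aio:///var/local/mayastor/io-engine/disk.io"]
+EOF
+```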
+
+And with this we have a dual worker node cluster which we can interact with.
+
+```console
+❯ kubectl get nodes
+NAME                 STATUS   ROLES           AGE   VERSION
+kind-control-plane   Ready    control-plane   15m   v1.30.0
+kind-worker          Ready    <none>          14m   v1.30.0
+kind-worker2         Ready    <none>          14m   v1.30.0
+```
+
+We also provide a [simple script](https://github.com/openebs/mayastor-extensions/blob/HEAD/scripts/helm/install.sh) to deploy a non-production version of the mayastor helm chart for testing:
+
+```console
+❯ ./scripts/helm/install.sh --wait
+Installing Mayastor Chart
++ helm install mayastor ./scripts/helm/../../chart -n mayastor --create-namespace --set=etcd.livenessProbe.initialDelaySeconds=5,etcd.readinessProbe.initialDelaySeconds=5,etcd.replicaCount=1 --set=obs.callhome.enabled=true,obs.callhome.sendReport=false,localpv-provisioner.analytics.enabled=false --set=eventing.enabled=false --wait --timeout 5m
+NAME: mayastor
+LAST DEPLOYED: Tue Dec 17 10:18:27 2024
+NAMESPACE: mayastor
+STATUS: deployed
+REVISION: 1
+NOTES:
+OpenEBS Mayastor has been installed. Check its status by running:
+$ kubectl get pods -n mayastor
+
+For more information or to view the documentation, visit our website at https://openebs.io/docs/.
++ set +x
+NAME                                            READY   STATUS            RESTARTS   AGE     IP           NODE
+mayastor-agent-core-6bf75fc6f8-pclc2            2/2     Running           0          3m10s   10.244.2.5   kind-worker
+mayastor-agent-ha-node-46jkk                    1/1     Running           0          3m10s   172.18.0.2   kind-worker
+mayastor-agent-ha-node-ljbfj                    1/1     Running           0          3m10s   172.18.0.3   kind-worker2
+mayastor-api-rest-7b4b575765-2lvqv              1/1     Running           0          3m10s   10.244.2.2   kind-worker
+mayastor-csi-controller-66b784d69f-zzl6z        6/6     Running           0          3m10s   172.18.0.3   kind-worker2
+mayastor-csi-node-flbdg                         2/2     Running           0          3m10s   172.18.0.3   kind-worker2
+mayastor-csi-node-tqqc9                         2/2     Running           0          3m10s   172.18.0.2   kind-worker
+mayastor-etcd-0                                 1/1     Running           0          3m10s   10.244.1.5   kind-worker2
+mayastor-io-engine-6jlzq                        0/2     PodInitializing   0          3m10s   172.18.0.2   kind-worker
+mayastor-io-engine-9vmsd                        2/2     Running           0          3m10s   172.18.0.3   kind-worker2
+mayastor-localpv-provisioner-56dbcc9fb8-w7csf   1/1     Running           0          3m10s   10.244.1.3   kind-worker2
+mayastor-loki-0                                 1/1     Running           0          3m10s   10.244.2.8   kind-worker
+mayastor-obs-callhome-69c9c454f7-d6wqr          1/1     Running           0          3m10s   10.244.2.3   kind-worker
+mayastor-operator-diskpool-7458c66b8-7s4z2      1/1     Running           0          3m10s   10.244.2.4   kind-worker
+mayastor-promtail-2jq85                         1/1     Running           0          3m10s   10.244.1.2   kind-worker2
+mayastor-promtail-9hzqt                         1/1     Running           0          3m10s   10.244.2.6   kind-worker
+```
+
+Now, you can list the io-engine nodes for example:
+
+```console
+❯ kubectl-mayastor get nodes
+ ID            GRPC ENDPOINT     STATUS  VERSION
+ kind-worker2  172.18.0.3:10124  Online  v1.0.0-997-g17488f4a7da3
+ kind-worker   172.18.0.2:10124  Online  v1.0.0-997-g17488f4a7da3
+```
+
+At the end of your experiment, remember to bring down the cluster:
+
+```console
+❯ ./scripts/k8s/deployer.sh stop
+Deleting cluster "kind" ...
+Deleted nodes: ["kind-control-plane" "kind-worker2" "kind-worker"]
+```
+
+## Running the test suites
+
+> [!Warning] _**Tests**_\
+> Sadly, this repo is lacking in tests; any help here would be greatly welcome!
+
+### Unit/Integration/Docs
+
+Mayastor's unit tests, integration tests, and documentation tests are run via the conventional `cargo test`.
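+
+For instance, to run a single crate's test suite with output shown (the crate name here is illustrative):
+
+```bash
+# Run one crate's tests; upgrade-job is one of this repo's components
+# (crate name assumed for illustration).
+cargo test -p upgrade-job -- --nocapture
+```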
+
+> **Remember to enter the nix-shell before running any of the commands herein**
+
+To test all crates, simply use the provided script:
+
+```bash
+./scripts/rust/test.sh
+```
+
+[doc-run]: ./run.md
diff --git a/doc/test.md b/doc/test.md
index ce5861914..0f072765e 100644
--- a/doc/test.md
+++ b/doc/test.md
@@ -1,4 +1,4 @@
-# Testing Mayastor
+# Testing Mayastor I/O Engine
 
 In order to test Mayastor, you'll need to be able to [**run Mayastor**][doc-run],
 follow that guide for persistent hugepages & kernel module setup.
@@ -32,10 +32,14 @@ Mayastor uses [spdk][spdk] which is quite sensitive to threading. This means tes
 
 ```bash
 cd io-engine
-RUST_LOG=TRACE cargo test -- --test-threads 1 --nocapture
+RUST_LOG=TRACE cargo test --features=io-engine-testing -- --test-threads 1 --nocapture
 ```
 
+> _**NOTE**_:
+> The `--features=io-engine-testing` flag ensures you run the tests with features enabled only for testing purposes
+
 ## Testing your own SPDK version
+
 To test your custom SPDK version please refer to the [spdk-rs documentation](https://github.com/openebs/spdk-rs/blob/develop/README.md#custom-spdk)
 
 ## Using PCIe NVMe devices in cargo tests while developing