diff --git a/.github/workflows/publish.yml b/.github/workflows/publish.yml new file mode 100644 index 00000000..59760481 --- /dev/null +++ b/.github/workflows/publish.yml @@ -0,0 +1,18 @@ +name: publish +on: + push: + branches: + - main + workflow_dispatch: +jobs: + deploy: + runs-on: ubuntu-latest + steps: + - uses: actions/checkout@v3 + with: + submodules: recursive + - uses: actions/setup-python@v4 + with: + python-version: 3.x + - run: pip install mkdocs-material + - run: mkdocs gh-deploy --force diff --git a/.gitignore b/.gitignore index 4f12a03f..1a3de280 100644 --- a/.gitignore +++ b/.gitignore @@ -2,3 +2,7 @@ scratch env *.pcap *.log +venv +site +.DS_Store +*.bkp diff --git a/deployments/k8s/network-emulation/ixia-c-k8s.drawio.svg b/deployments/k8s/network-emulation/ixia-c-k8s.drawio.svg index 78227e5e..2dbdd174 100644 --- a/deployments/k8s/network-emulation/ixia-c-k8s.drawio.svg +++ b/deployments/k8s/network-emulation/ixia-c-k8s.drawio.svg @@ -1,4 +1,4 @@ - + @@ -27,7 +27,7 @@ - + @@ -50,7 +50,7 @@ - + @@ -76,7 +76,7 @@ - + @@ -84,9 +84,13 @@
+ + + keng-controller + + - ixia-c-controller

:8443     https @@ -100,7 +104,7 @@
- ixia-c-controller... + keng-controller... @@ -131,7 +135,7 @@ - + @@ -154,7 +158,7 @@ - + @@ -183,7 +187,7 @@ - + @@ -191,7 +195,7 @@ - + @@ -220,10 +224,10 @@ - + - + @@ -246,7 +250,7 @@ - + @@ -271,7 +275,7 @@ - + @@ -295,7 +299,7 @@ - + @@ -324,11 +328,11 @@ - + - + @@ -357,7 +361,7 @@ - + @@ -503,9 +507,9 @@ - + - + @@ -531,9 +535,9 @@ - + - + diff --git a/deployments/k8s/network-emulation/pods.yaml b/deployments/k8s/network-emulation/pods.yaml index 2e8a9f8e..311ef55c 100644 --- a/deployments/k8s/network-emulation/pods.yaml +++ b/deployments/k8s/network-emulation/pods.yaml @@ -133,4 +133,3 @@ spec: - "sleep" - "infinity" restartPolicy: Always - \ No newline at end of file diff --git a/deployments/k8s/network-emulation/readme.md b/deployments/k8s/network-emulation/readme.md index fa24e482..a5cce957 100644 --- a/deployments/k8s/network-emulation/readme.md +++ b/deployments/k8s/network-emulation/readme.md @@ -70,7 +70,7 @@ To achieve this, we'll be using [Meshnet CNI](https://github.com/networkop/meshn docker pull ghcr.io/open-traffic-generator/keng-controller:0.1.0-53 docker pull ghcr.io/open-traffic-generator/ixia-c-traffic-engine:1.6.0.85 docker pull ghcr.io/open-traffic-generator/ixia-c-protocol-engine:1.00.0.337 - + # download DUT image docker pull ubuntu:22.04 @@ -133,7 +133,7 @@ To achieve this, we'll be using [Meshnet CNI](https://github.com/networkop/meshn 7. Run IPv4 forwarding test using [snappi](https://github.com/open-traffic-generator/snappi/tree/main/gosnappi) which is an auto-generated SDK based on [Open Traffic Generator API](https://redocly.github.io/redoc/?url=https://raw.githubusercontent.com/open-traffic-generator/models/master/artifacts/openapi.yaml&nocors) - The test parameters, e.g. location of Ixia-C controller, name of interfaces, etc. can be modified inside `testConst` map in `ipfwd.go`. + The test parameters, e.g. location of KENG controller, name of interfaces, etc. 
can be modified inside `testConst` map in `ipfwd.go`. Check the file for more details on the test. diff --git a/deployments/k8s/readme.md b/deployments/k8s/readme.md index c74801bd..28e0d424 100644 --- a/deployments/k8s/readme.md +++ b/deployments/k8s/readme.md @@ -51,23 +51,23 @@ This section hosts [kustomize](https://kustomize.io/) manifests for deploying va ### Deploy Topology and Run Tests (Stateless traffic on eth0) -1. Deploy topology consisting of two ixia-c port pods and one ixia-c controller pod +1. Deploy topology consisting of two Ixia-c port pods and one KENG controller pod Topology manifests are kept inside `overlays/two-traffic-ports-eth0` which specifies `port1` and `port2`. * iptables rule is configured on both ports to drop UDP/TCP packets destined for ports 7000-8000 * number of ports can be increased by adding new port dirs similar to `port1` and using it in rest of the files ```bash - # deploy ixia-c with two ports that only support stateless traffic over eth0 + # deploy Ixia-c with two ports that only support stateless traffic over eth0 kubectl apply -k overlays/two-traffic-ports-eth0 - # ensure all ixia-c pods are ready + # ensure all Ixia-c pods are ready kubectl wait --for=condition=Ready pods --all -n ixia-c ``` 2. Generate test pre-requisites The sample test requires `conformance/test-config.yaml` which is auto-generated: - * ixia-c controller / port endpoints + * KENG controller / port endpoints * common port / flow properties * values like port pod IPs, gateway MAC on tx port, etc. 
diff --git a/deployments/raw-one-arm.yml b/deployments/raw-one-arm.yml index e3f6187e..789f60f9 100644 --- a/deployments/raw-one-arm.yml +++ b/deployments/raw-one-arm.yml @@ -16,6 +16,6 @@ services: - ARG_IFACE_LIST=virtual@af_packet,${IFC1} - OPT_NO_HUGEPAGES=Yes aur: - image: ghcr.io/open-traffic-generator/ixia-c-app-usage-reporter:${AUR_VERSION:-latest} + image: ghcr.io/open-traffic-generator/keng-app-usage-reporter:${AUR_VERSION:-latest} network_mode: "host" restart: always diff --git a/deployments/raw-three-arm-mesh.yml b/deployments/raw-three-arm-mesh.yml index e0852234..893f9e56 100644 --- a/deployments/raw-three-arm-mesh.yml +++ b/deployments/raw-three-arm-mesh.yml @@ -39,6 +39,6 @@ services: - ARG_IFACE_LIST=virtual@af_packet,eth0 - OPT_NO_HUGEPAGES=Yes aur: - image: ghcr.io/open-traffic-generator/ixia-c-app-usage-reporter:${AUR_VERSION:-latest} + image: ghcr.io/open-traffic-generator/keng-app-usage-reporter:${AUR_VERSION:-latest} network_mode: "host" restart: always diff --git a/deployments/raw-two-arm.yml b/deployments/raw-two-arm.yml index 1c12f763..8f45843c 100644 --- a/deployments/raw-two-arm.yml +++ b/deployments/raw-two-arm.yml @@ -26,6 +26,6 @@ services: - ARG_IFACE_LIST=virtual@af_packet,${IFC2} - OPT_NO_HUGEPAGES=Yes aur: - image: ghcr.io/open-traffic-generator/ixia-c-app-usage-reporter:${AUR_VERSION:-latest} + image: ghcr.io/open-traffic-generator/keng-app-usage-reporter:${AUR_VERSION:-latest} network_mode: "host" restart: always diff --git a/docs/CNAME b/docs/CNAME new file mode 100644 index 00000000..9095b69b --- /dev/null +++ b/docs/CNAME @@ -0,0 +1 @@ +ixia-c.dev \ No newline at end of file diff --git a/docs/assets/favicon.png b/docs/assets/favicon.png new file mode 100644 index 00000000..0d2490eb Binary files /dev/null and b/docs/assets/favicon.png differ diff --git a/docs/assets/keng-diagram.png b/docs/assets/keng-diagram.png new file mode 100644 index 00000000..ee52fde3 Binary files /dev/null and b/docs/assets/keng-diagram.png differ diff 
--git a/docs/assets/logo.png b/docs/assets/logo.png new file mode 100644 index 00000000..1f34f181 Binary files /dev/null and b/docs/assets/logo.png differ diff --git a/docs/deployments-containerlab.md b/docs/deployments-containerlab.md new file mode 100644 index 00000000..41eb220a --- /dev/null +++ b/docs/deployments-containerlab.md @@ -0,0 +1,42 @@ + +# Deploy Ixia-c-one using containerlab + +Ixia-c-one is deployed as a single-container application by using [containerlab](https://containerlab.dev/quickstart/). The setup consists of the following components: + +* **containerlab**: Containerlab provides a CLI for orchestrating and managing container-based networking labs. It starts the containers, builds a virtual wiring between them to create lab topologies depending on a user's choice, and manages the labs' lifecycle. +* **Ixia-c-one**: Keysight Ixia-c-one is a single-container distribution of Ixia-c, which in turn is Keysight's reference implementation of the Open Traffic Generator API. + + Meet the [keysight_ixia-c-one](https://containerlab.dev/manual/kinds/keysight_ixia-c-one) kind! It is available from containerlab [release 0.26](https://containerlab.dev/rn/0.26/#keysight-ixia-c). +* **srl linux**: Nokia SR Linux is a truly open network operating system (NOS) that makes your data center switching infrastructure more scalable, more flexible, and simpler to operate. + +![ixia-c-one](res/ixia-c-one-aur.drawio.svg) + +## Install containerlab + + ```sh + # download and install the latest release (may require sudo) + bash -c "$(curl -sL https://get.containerlab.dev)" + ``` + +## Deploy the topology + +* You can find a sample topology definition in , which consists of Nokia SR Linux and Ixia-c-one nodes that are connected to one another. +* This consists of a Keysight ixia-c-one node with 2 ports connected to 2 ports on an srl linux node via two point-to-point ethernet links. Both nodes are also connected with their management interfaces to the containerlab docker network.
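As an illustration only (node names, images, tags, and SR Linux interface names below are assumptions, not the contents of the actual sample topology file), a minimal containerlab topology wiring the two nodes might look like this:

```yaml
# illustrative sketch only -- not the shipped sample topology
name: ixiac01
topology:
  nodes:
    ixia-c-one:
      kind: keysight_ixia-c-one
      image: ghcr.io/open-traffic-generator/ixia-c-one:latest
    srl:
      kind: nokia_srlinux
      image: ghcr.io/nokia/srlinux
  links:
    # two point-to-point ethernet links between the nodes
    - endpoints: ["ixia-c-one:eth1", "srl:e1-1"]
    - endpoints: ["ixia-c-one:eth2", "srl:e1-2"]
```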
+ + ```sh + # After downloading the sample topology file + containerlab deploy --topo ixiac01.clab.yml + ``` + +- After deployment, you are now ready to run a test using the topology. + +## Run a test + +* Follow this [link](https://containerlab.dev/lab-examples/ixiacone-srl/#execution) to run a test. + +## Destroy/Remove the topology + + ```sh + # delete a particular topology + containerlab destroy --topo ixiac01.clab.yml + ``` diff --git a/docs/deployments-docker-compose.md b/docs/deployments-docker-compose.md new file mode 100644 index 00000000..e37db43b --- /dev/null +++ b/docs/deployments-docker-compose.md @@ -0,0 +1,77 @@ +# Deploy Ixia-c using docker-compose + +Deploying multiple services manually (along with the required parameters) is tedious in some scenarios. For convenience, the [deployments](../deployments) directory consists of the following `docker-compose` files: + +- `*.yml`: Describes the services for a given scenario and the deployment parameters that are required to start them. +- `.env`: Holds the default parameters that are used across all `*.yml` files, for example, the name of the interface and the version of docker images. + +If a given `.yml` file does not use certain variables from `.env`, those variables can safely be ignored. +The following is an example of a typical workflow using `docker-compose`. + +```sh +# change default parameters if needed; e.g. interface name, image version, etc. +vi deployments/.env +# deploy and start services for community users +docker-compose -f deployments/.yml up -d +# stop and remove services deployed +docker-compose -f deployments/.yml down +``` + +On most systems, `docker-compose` needs to be installed separately even if docker is already installed. For more information, see [docker prerequisites](prerequisites.md#docker). + +>All the scenarios that are mentioned in the following sections describe both manual and automated (requiring docker-compose) steps.
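The `.env` file is a plain list of `KEY=value` lines. As a hedged sketch (the variable names `IFC1`, `IFC2`, and `AUR_VERSION` do appear in the compose files in this repository, but the exact contents and defaults of the shipped `.env` may differ):

```sh
# illustrative .env sketch -- actual defaults may differ
# first test interface (used by one-arm and two-arm scenarios)
IFC1=eth1
# second test interface (used by the two-arm scenario)
IFC2=eth2
# tag for the app-usage-reporter image
AUR_VERSION=latest
```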
+ +## Deployment Parameters + +### Controller + + | Controller Parameters | Optional | Default | Description | + |-----------------------------|-----------|-------------------------|-----------------------------------------------------------------| + | --debug | Yes | false | Enables high volume logs with debug info for better diagnostics.| + | --disable-app-usage-reporter| Yes | false | Disables sending of usage data to the app-usage-reporter. | + | --http-port | Yes | 8443 | TCP port for HTTP server. | + | --aur-host | Yes | https://localhost:5600 | Overrides the location of the app-usage-reporter. | + | --accept-eula | No | NA | Indicates that the user has accepted EULA, otherwise the controller will not boot up. | + | --license-servers | No | NA | Indicates the ip address of license servers for commercial users. | + + Docker Parameters: + +- `--net=host`: It is recommended to allow the use of the host network stack, in order to address the traffic-engine containers using `localhost` instead of `container-ip`, when deployed on the same host. +- `-d`: This starts the container in background. + + Example: + + ```bash + # For community users + docker run --net=host -d ghcr.io/open-traffic-generator/keng-controller --accept-eula --debug --http-port 5050 + + # For commercial users + docker run --net=host -d ghcr.io/open-traffic-generator/keng-controller --accept-eula --debug --http-port 5050 --license-servers="ip/hostname of license server" + ``` + +### Traffic Engine + + | Environment Variables | Optional | Default | Description | + |-----------------------------|-----------|-------------------------|-----------------------------------------------------------------| + | ARG_IFACE_LIST | No | NA | Name of the network interface to bind to. It must be visible to the traffic-engine's network namespace. 
For example, `virtual@af_packet,eth1` where `eth1` is the interface name and `virtual@af_packet` indicates that the interface is managed by the host kernel's network stack.| + | OPT_LISTEN_PORT | Yes | "5555" | TCP port on which the controller can establish a connection with the traffic-engine.| + | OPT_NO_HUGEPAGES | Yes | "No" | If set to `Yes`, it disables hugepages in the OS. Hugepages need to be disabled when the network interfaces are managed by the host kernel's stack.| + + Docker Parameters: + +- `--net=host`: This is required if the traffic-engine needs to bind to a network interface that is visible in the host network stack but not inside Docker's network. +- `--privileged`: This is required because the traffic-engine needs to exercise capabilities that require elevated privileges. +- `--cpuset-cpus`: The traffic-engine usually requires 1 shared CPU core for management activities and 2 exclusive CPU cores, one each for the transmit engine and the receive engine. The shared CPU core can be shared across multiple traffic-engines. For example, `--cpuset-cpus="0,1,2"` indicates that cpu0 is shared, cpu1 is used for transmit, and cpu2 is used for receive. If CPU cores are not specified, arbitrary CPU cores will be chosen. > If enough CPU cores are not provided, the available CPU cores may be shared among the management, transmit, and receive engines, which can occasionally result in lower performance. +- `-d`: This starts the container in the background.
+ + Example: + + ```bash + docker run --net=host --privileged -d \ + -e OPT_LISTEN_PORT="5555" \ + -e ARG_IFACE_LIST="virtual@af_packet,eth1" \ + -e OPT_NO_HUGEPAGES="Yes" \ + --cpuset-cpus="0,1,2" \ + ghcr.io/open-traffic-generator/ixia-c-traffic-engine + ``` diff --git a/docs/deployments-kne.md b/docs/deployments-kne.md new file mode 100644 index 00000000..da64093d --- /dev/null +++ b/docs/deployments-kne.md @@ -0,0 +1,181 @@ +# Deploy Ixia-c using KNE + +Ixia-c can be deployed in the k8s environment by using [Kubernetes Network Emulation](https://github.com/openconfig/kne). The deployment consists of the following services: + +* **operator**: Watches the IxiaTG custom resources and manages the lifecycle of the Ixia-c components (controller, traffic-engine, protocol-engine, gnmi-server) in the cluster. +* **controller**: Serves API requests from the clients and manages workflow across one or more traffic engines. +* **traffic-engine**: Generates, captures, and processes the traffic from one or more network interfaces (on linux-based OS). +* **protocol-engine**: Emulates layer3 networks and protocols such as BGP and ISIS (on linux-based OS). +* **gnmi-server**: Captures statistics from one or more network interfaces (on linux-based OS). + +## System Prerequisites + +### CPU and RAM + +Following are the recommended resources for a basic use-case. + +- `keng-operator`: Each instance requires at least 1 CPU core and 2GB RAM. +- `keng-controller`: Each instance requires at least 1 CPU core and 2GB RAM. +- `otg-gnmi-server`: Each instance requires at least 1 CPU core and 2GB RAM. +- `ixia-c-traffic-engine`: Each instance requires 2 dedicated CPU cores and 3GB dedicated RAM. +- `ixia-c-protocol-engine`: Each instance requires 4 dedicated CPU cores and 1GB dedicated RAM per port.
+ +### OS and Software Prerequisites + +- x86_64 Linux Distribution (Centos 7+ or Ubuntu 18+ have been tested) +- Docker 19+ (as distributed by https://docs.docker.com/) +- Go 1.17+ +- kind 0.18+ + +## Install KNE + +* The main use case we are interested in is the ability to bring up arbitrary topologies that represent a production topology. This requires multiple vendors, as well as traffic generation and end hosts. + + ```sh + go install github.com/openconfig/kne/kne@latest + ``` + +## Deploy keng-operator + +* The Ixia operator defines a CRD for the Ixia network device (IxiaTG) and can be used to build up different network topologies with network devices from other vendors. Network interconnects between the topology nodes can be set up with various container network interface (CNI) plugins for Kubernetes for attaching multiple network interfaces to the nodes. + + ```sh + kubectl apply -f https://github.com/open-traffic-generator/keng-operator/releases/download/v0.3.5/ixiatg-operator.yaml + ``` + +## Apply configmap + +* The various Ixia component versions to be deployed are derived from the Ixia release version, as specified in the IxiaTG config. These component mappings are captured in ixia-configmap.yaml for each Ixia release. The configmap, as shown in the snippet below, comprises the Ixia release version ("release") and the list of qualified component versions for that release. The Ixia operator first tries to access these details from Keysight published releases; if unable to do so, it tries to locate them in the Kubernetes configmap. This allows users to have the operator load images from private repositories, by updating the configmap entries. Thus, for deployment with custom images, the user is expected to download the release-specific ixia-configmap.yaml from published releases. Then, in the configmap, update the specific container image "path" / "tag" fields and also update the "release" to some custom name.
Start the operator first, as specified in the deployment section below, before applying the configmap locally. After this, the operator can be used to deploy the containers and services. + + * For community users: + + ```yaml + apiVersion: v1 + kind: ConfigMap + metadata: + name: ixiatg-release-config + namespace: ixiatg-op-system + data: + versions: | + { + "release": "0.1.0-53", + "images": [ + { + "name": "controller", + "path": "ghcr.io/open-traffic-generator/keng-controller", + "tag": "0.1.0-53" + }, + { + "name": "gnmi-server", + "path": "ghcr.io/open-traffic-generator/otg-gnmi-server", + "tag": "1.13.0" + }, + { + "name": "traffic-engine", + "path": "ghcr.io/open-traffic-generator/ixia-c-traffic-engine", + "tag": "1.6.0.85" + }, + { + "name": "protocol-engine", + "path": "ghcr.io/open-traffic-generator/ixia-c-protocol-engine", + "tag": "1.00.0.337" + }, + { + "name": "ixhw-server", + "path": "ghcr.io/open-traffic-generator/keng-layer23-hw-server", + "tag": "0.13.0-6" + } + ] + } + ``` + + * For commercial users, `LICENSE_SERVERS` needs to be specified for the `keng-controller` deployment.
+ + ```yaml + apiVersion: v1 + kind: ConfigMap + metadata: + name: ixiatg-release-config + namespace: ixiatg-op-system + data: + versions: | + { + "release": "0.1.0-53", + "images": [ + { + "name": "controller", + "path": "ghcr.io/open-traffic-generator/keng-controller", + "tag": "0.1.0-53", + "env": { + "LICENSE_SERVERS": "ip/hostname of license server" + } + }, + { + "name": "gnmi-server", + "path": "ghcr.io/open-traffic-generator/otg-gnmi-server", + "tag": "1.13.0" + }, + { + "name": "traffic-engine", + "path": "ghcr.io/open-traffic-generator/ixia-c-traffic-engine", + "tag": "1.6.0.85" + }, + { + "name": "protocol-engine", + "path": "ghcr.io/open-traffic-generator/ixia-c-protocol-engine", + "tag": "1.00.0.337" + }, + { + "name": "ixhw-server", + "path": "ghcr.io/open-traffic-generator/keng-layer23-hw-server", + "tag": "0.13.0-6" + } + ] + } + ``` + + ```sh + # After saving the configmap snippet in a yaml file + kubectl apply -f ixiatg-configmap.yaml + ``` + +## Deploy the topology + +* The following snippet shows a simple KNE b2b topology. + + ```yaml + name: ixia-c + nodes: + - name: otg + vendor: KEYSIGHT + version: 0.1.0-53 + services: + 8443: + name: https + inside: 8443 + 40051: + name: grpc + inside: 40051 + 50051: + name: gnmi + inside: 50051 + links: + - a_node: otg + a_int: eth1 + z_node: otg + z_int: eth2 + ``` + + ```sh + # After saving the topology snippet in a yaml file + kne create topology.yaml + ``` + +* After deployment, you are now ready to run a test using this topology.
+ +## Destroy/Remove the topology + + ```sh + # delete a particular topology + kne delete topology.yaml + ``` diff --git a/docs/deployments.md b/docs/deployments.md index f48928bc..9892ffa2 100644 --- a/docs/deployments.md +++ b/docs/deployments.md @@ -1,114 +1,39 @@ -# Deployment Guide +# Deployment -- [Table of Contents](readme.md) - - Deployment Guide - * [Overview](#overview) - * [Bootstrap](#bootstrap) - * [Deployment Parameters](#deployment-parameters) - * [Diagnostics](#diagnostics) - * [Test Suite](#test-suite) - * [One-arm Scenario](#one-arm-scenario) - * [Two-arm Scenario](#two-arm-scenario) - * [Three-arm Mesh Scenario](#three-arm-mesh-scenario) +## Overview -### Overview +Ixia-c is distributed and deployed as a multi-container application that consists of the following services: -Ixia-c is distributed / deployed as a multi-container application consisting of following services: +* **controller**: Serves API requests from the clients and manages workflow across one or more traffic engines. +* **traffic-engine**: Generates, captures, and processes traffic from one or more network interfaces (on linux-based OS). +* **app-usage-reporter**: (Optional) Collects anonymous usage reports from the controller and uploads them to the Keysight Cloud, with minimal impact on the host resources. -* **controller** - Serves API request from clients and manages workflow across one or more traffic engines. -* **traffic-engine** - Generates, captures and processes traffic from one or more network interfaces (on linux-based OS). -* **app-usage-reporter** - (Optional) Collects anonymous usage report from controller and uploads it to Keysight Cloud, with minimal impact on host resources. +All these services are available as docker images on the [GitHub Open-Traffic-Generator repository](https://github.com/orgs/open-traffic-generator/packages). To use specific versions of these images, see [Ixia-c Releases](releases.md).
-All these services are available as docker images on [GitHub Open-Traffic-Generator repository](https://github.com/orgs/open-traffic-generator/packages). Please check [Ixia-c Releases](releases.md) to use specific versions of these images. +![ixia-c-aur](res/ixia-c-aur.drawio.svg "ixia-c-aur") -
- -
+ > Once the services are deployed, [snappi-tests](https://github.com/open-traffic-generator/snappi-tests/tree/3ffe20f) (a collection of [snappi](https://pypi.org/project/snappi/) test scripts and configurations) can be set up to run against Ixia-c. -> Once the services are deployed, [snappi-tests](https://github.com/open-traffic-generator/snappi-tests/tree/247fa80), a collection of [snappi](https://pypi.org/project/snappi/) test scripts and configurations, can be setup to run against Ixia-c. +## Bootstrap -### Bootstrap +The Ixia-c services can either all be deployed on the same host or each on separate hosts (as long as they are mutually reachable over the network). There is no boot-time dependency between them, which allows **horizontal scalability** without interrupting the existing services. -Ixia-c services can either all be deployed on same host or each on separate hosts (as long as they're mutually reachable over network). There's no boot-time dependency between them, which allows for **horizontal scalability** without interrupting existing services. +Connectivity between the services is established in two ways: -Following outlines how connectivity is established between the services: +- **controller & traffic-engine**: The client pushes a traffic configuration to the controller, containing the `location` of the traffic engine. +- **controller & app-usage-reporter**: The controller periodically tries to establish connectivity with the `app-usage-reporter` on a `location`, which can be overridden by using the controller's deployment parameters. -* **controller & traffic-engine** - When client pushes a traffic configuration to controller containing `location` of traffic engine. -* **controller & app-usage-reporter** - Controller periodically tries to establish connectivity with app-usage-reporter on a `location` which can be overridden using controller's deployment parameters.
+>The **location** (network address) of the traffic-engine and the app-usage-reporter must be reachable from the controller, even if they are not reachable from the client scripts. -The **location** (aka network address) of traffic-engine and app-usage-reporter must be reachable from controller, even if they're not reachable from client scripts. +## Deployment types -#### Using docker-compose +* [Using docker-compose](deployments-docker-compose.md) -Deploying multiple services manually (along with required parameters) may not be desired in some scenarios and hence, for convenience [deployments](../deployments) directory consists of `docker-compose` files, where: -* `*.yml` files describe services for a given scenario and deployment parameters required to start them. -* `.env` file holds default parameters to be used across all `*.yml` files, like name of interface, version of docker images, etc. +* [Using containerlab](deployments-containerlab.md) -If a concerned `.yml` file does not include certain variables from `.env`, those can then safely be ignored. -Here's how the usual workflow looks like when using `docker-compose`. +* [Using KNE](deployments-kne.md) -```sh -# change default parameters if needed; e.g. interface name, image version, etc. -vi deployments/.env -# deploy and start services -docker-compose -f deployments/.yml up -d -# stop and remove services deployed -docker-compose -f deployments/.yml down -``` - -On most systems, `docker-compose` needs to be installed separately even when docker is already installed. Please check [docker prerequisites](prerequisites.md#docker) for more details. - ->All the scenarios mentioned in upcoming sections describe both manual and automated (requiring docker-compose) steps. 
- -### Deployment Parameters - -#### Controller - - | Controller Parameters | Optional | Default | Description | - |-----------------------------|-----------|-------------------------|-----------------------------------------------------------------| - | --debug | Yes | false | Enables high volume logs with debug info for better diagnostics.| - | --disable-app-usage-reporter| Yes | false | Disables sending usage data to app-usage-reporter. | - | --http-port | Yes | 8443 | TCP port for HTTP server. | - | --aur-host | Yes | https://localhost:5600 | Overrides location of app-usage-reporter. | - | --accept-eula | No | NA | Indicates that user has accepted EULA, otherwise controller won't boot up | - - Docker Parameters: - * `--net=host` - This is recommended to allowing using host's network stack in order to address traffic-engine containers using `localhost` instead of `container-ip`, when deployed on same host. - * `-d` - This starts container in background. - - Example: - - ```bash - docker run --net=host -d ghcr.io/open-traffic-generator/keng-controller --accept-eula --debug --http-port 5050 - ``` - -#### Traffic Engine - - | Environment Variables | Optional | Default | Description | - |-----------------------------|-----------|-------------------------|-----------------------------------------------------------------| - | ARG_IFACE_LIST | No | NA | Name of the network interface to bind to. It must be visible to traffic-engine's network namespace. e.g. `virtual@af_packet,eth1` where `eth1` is interface name while `virtual@af_packet` indicates that the interface is managed by host kernel's network stack.| - | OPT_LISTEN_PORT | Yes | "5555" | TCP port on which controller can establish connection with traffic-engine.| - | OPT_NO_HUGEPAGES | Yes | "No" | Setting this to `Yes` disables hugepages in OS. 
The hugepages needs to be disabled when using network interfaces managed by host kernel's stack.| - - Docker Parameters: - * `--net=host` - This is needed if traffic-engine needs to bind to a network interface that is visible in host network stack but not inside docker's network. - * `--privileged` - This is needed because traffic-engine needs to exercise capabilities that require elevated privileges. - * `--cpuset-cpus` - The traffic-engine usually requires 1 shared CPU core for management activities and 2 exclusive CPU cores, each for transmit engine and receive engine. The shared CPU core can be shared across multiple traffic-engines. e.g. `--cpuset-cpus="0,1,2"` indicates that cpu0 is shared, cpu1 is used for transmit and cpu2 is used for receive. If CPU cores are not specified, arbitrary CPU cores will be chosen. - > If enough CPU cores are not provided, available CPU cores may be shared among management, transmit and receive engines, occasionally resulting in lower performance. - * `-d` - This starts container in background. 
- - Example: - - ```bash - docker run --net=host --privileged -d \ - -e OPT_LISTEN_PORT="5555" \ - -e ARG_IFACE_LIST="virtual@af_packet,eth1" \ - -e OPT_NO_HUGEPAGES="Yes" \ - --cpuset-cpus="0,1,2" \ - ghcr.io/open-traffic-generator/ixia-c-traffic-engine - ``` - -### Diagnostics +## Diagnostics Check and download controller logs: @@ -134,26 +59,30 @@ docker logs docker cp :/var/log/usstream/usstream.log ./ ``` -### Test Suite +## Test Suite -### One-arm Scenario +## One-arm Scenario > TODO: diagram * Automated ```bash - docker-compose -f deployments/raw-one-arm.yml up -d + docker-compose -f deployments/raw-one-arm.yml up -d # community users # optionally stop and remove services deployed - docker-compose -f deployments/raw-one-arm.yml down + docker-compose -f deployments/raw-one-arm.yml down # community users ``` * Manual ```bash # start controller and app usage reporter + + # community users docker run --net=host -d ghcr.io/open-traffic-generator/keng-controller --accept-eula - docker run --net=host -d ghcr.io/open-traffic-generator/ixia-c-app-usage-reporter + # commercial users + docker run --net=host -d ghcr.io/open-traffic-generator/keng-controller --accept-eula --license-servers="ip/hostname of license server" + docker run --net=host -d ghcr.io/open-traffic-generator/keng-app-usage-reporter # start traffic engine on network interface eth1, TCP port 5555 and cpu cores 0, 1, 2 docker run --net=host --privileged -d \ @@ -164,24 +93,27 @@ docker cp :/var/log/usstream/usstream.log ./ ghcr.io/open-traffic-generator/ixia-c-traffic-engine ``` -### Two-arm Scenario +## Two-arm Scenario > TODO: diagram * Automated ```bash - docker-compose -f deployments/raw-two-arm.yml up -d + docker-compose -f deployments/raw-two-arm.yml up -d # community users # optionally stop and remove services deployed - docker-compose -f deployments/raw-two-arm.yml down + docker-compose -f deployments/raw-two-arm.yml down # community users ``` * Manual ```bash # start controller and app usage 
reporter + # community users docker run --net=host -d ghcr.io/open-traffic-generator/keng-controller --accept-eula - docker run --net=host -d ghcr.io/open-traffic-generator/ixia-c-app-usage-reporter + # commercial users + docker run --net=host -d ghcr.io/open-traffic-generator/keng-controller --accept-eula --license-servers="ip/hostname of license server" + docker run --net=host -d ghcr.io/open-traffic-generator/keng-app-usage-reporter # start traffic engine on network interface eth1, TCP port 5555 and cpu cores 0, 1, 2 docker run --net=host --privileged -d \ @@ -200,26 +132,29 @@ docker cp :/var/log/usstream/usstream.log ./ ghcr.io/open-traffic-generator/ixia-c-traffic-engine ``` -### Three-arm Mesh Scenario +## Three-arm Mesh Scenario -This scenario binds traffic engine to management network interface belonging to the container which in turn is part of docker0 network. +This scenario binds the traffic engine to the management network interface, which belongs to the container that in turn is part of the docker0 network.
> TODO: diagram * Automated ```bash - docker-compose -f deployments/raw-three-arm-mesh.yml up -d + docker-compose -f deployments/raw-three-arm-mesh.yml up -d # community users # optionally stop and remove services deployed - docker-compose -f deployments/raw-three-arm-mesh.yml down + docker-compose -f deployments/raw-three-arm-mesh.yml down # community users ``` * Manual ```bash # start controller and app usage reporter + # community users docker run --net=host -d ghcr.io/open-traffic-generator/keng-controller --accept-eula - docker run --net=host -d ghcr.io/open-traffic-generator/ixia-c-app-usage-reporter + # commercial users + docker run --net=host -d ghcr.io/open-traffic-generator/keng-controller --accept-eula --license-servers="ip/hostname of license server" + docker run --net=host -d ghcr.io/open-traffic-generator/keng-app-usage-reporter # start traffic engine on network interface eth0, TCP port 5555 and cpu cores 0, 1, 2 docker run --privileged -d \ @@ -264,7 +199,7 @@ This scenario binds traffic engine to management network interface belonging to ### Setup Tests - Please make sure that the client setup meets [Python Prerequisites](#test-prerequisites). + Ensure that the client setup meets the [Python Prerequisites](prerequisites.md#software-prerequisites). * **Install `snappi`.** @@ -278,7 +213,7 @@ This scenario binds traffic engine to management network interface belonging to python -m pip install --upgrade -r requirements.txt ``` -* **Ensure a `sample test` script executes successfully. Please see [test details](#test-details) for more info.** +* **Ensure that a `sample test` script executes successfully. 
For more information, see [test details](#test-details).**    ```sh   # provide intended API Server and port addresses   @@ -289,26 +224,26 @@ This scenario binds traffic engine to management network interface belonging to ## Test Details  -The test scripts are based on `snappi client SDK` (auto-generated from [Open Traffic Generator Data Model](https://github.com/open-traffic-generator/models)) and have been written using `pytest`. +The test scripts are based on the `snappi` client SDK (auto-generated from the [Open Traffic Generator Data Model](https://github.com/open-traffic-generator/models)) and have been written using `pytest`.  -Open Traffic Generator Data Model can be accessed from any browser by hitting this url (https:///docs/) and start scripting. +You can open the Open Traffic Generator Data Model in any browser at [https:///docs/](https:///docs/) and start scripting right away.  The test directory structure is as follows:  -* `snappi-tests/tests/settings.json` - global test configuration, includes `controller` host, `traffic-engine` host and `speed` settings. -* `snappi-tests/configs/` - contains pre-defined traffic configurations in JSON, which can be loaded by test scripts. -* `snappi-tests/tests` - contains end-to-end test scripts covering most common use-cases. -* `snappi-tests/tests/utils/` - contains most commonly needed helpers, used throughout test scripts. -* `snappi-tests/tests/env/bin/python` - python executable (inside virtual environment) to be used for test execution. +* `snappi-tests/tests/settings.json`: Global test configuration, which includes `controller` host, `traffic-engine` host, and `speed` settings. +* `snappi-tests/configs/`: Contains pre-defined traffic configurations in JSON, which can be loaded by test scripts. +* `snappi-tests/tests`: Contains end-to-end test scripts covering the most common use cases. 
+* `snappi-tests/tests/utils/`: Contains the most commonly needed helpers, used throughout the test scripts. +* `snappi-tests/tests/env/bin/python`: Python executable (inside a virtual environment) to be used for test execution.  -Most test scripts follow the format of following sample scripts:  +Most of the test scripts follow the format of the following sample scripts:  -* `snappi-tests/tests/raw/test_tcp_unidir_flows.py` - for unidirectional flow use case. -* `snappi-tests/tests/raw/test_tcp_bidir_flows.py` - for using pre-defined JSON traffic config & bidirectional flow use case. -* `snappi-tests/tests/raw/test_basic_flow_stats.py` - for basic flow statistics validation use case. -* `` - for validating capture. TODO -* `` - some example from gtpv2 [ethernet - ipv4 - udp - gtpv2 - ipv6] TODO -* `` - for one arm scenario TODO +* `snappi-tests/tests/raw/test_tcp_unidir_flows.py`: For the unidirectional flow use case. +* `snappi-tests/tests/raw/test_tcp_bidir_flows.py`: For using a pre-defined JSON traffic config and the bidirectional flow use case. +* `snappi-tests/tests/raw/test_basic_flow_stats.py`: For the basic flow statistics validation use case. +* ``: For validating capture. 
TODO +* ``: Some examples from gtpv2 [ethernet - ipv4 - udp - gtpv2 - ipv6] TODO +* ``: For one arm scenario TODO To execute batch tests marked as `sanity`: @@ -330,7 +265,7 @@ tests/env/bin/python -m pytest tests/py -m "sanity" } ``` -* When `controller` and `traffic-engine`s are located on same system (local - raw sockets) +* When `controller` and `traffic-engine`s are located on the same system (local - raw sockets) ```json { @@ -341,47 +276,3 @@ tests/env/bin/python -m pytest tests/py -m "sanity" ] } ``` - -## Deploy Ixia-c-one using containerlab - -### overview - -Ixia-c-one is deployed as single-container application using [containerlab](https://containerlab.dev/quickstart/) consisting of following services: - -* **containerlab** - Containerlab provides a CLI for orchestrating and managing container-based networking labs. It starts the containers, builds a virtual wiring between them to create lab topologies of users choice and manages labs lifecycle. -* **Ixia-c-one** - Keysight ixia-c-one is a single-container distribution of ixia-c, which in turn is Keysight's reference implementation of Open Traffic Generator API. -Meet [keysight_ixia-c-one](https://containerlab.dev/manual/kinds/keysight_ixia-c-one) kind! It is available from containerlab [release 0.26](https://containerlab.dev/rn/0.26/#keysight-ixia-c). -* **srl linux** - Nokia SR Linux is a truly open network operating system (NOS) that makes your data center switching infrastructure more scalable, more flexible and simpler to operate. - -
- -
- -### Install containerlab - ```sh - # download and install the latest release (may require sudo) - bash -c "$(curl -sL https://get.containerlab.dev)" - ``` - -### Deploy the topology - -* A sample topology definition you can find here https://containerlab.dev/lab-examples/ixiacone-srl/ which consists of Nokia SR Linux and Ixia-c-one nodes connected one to another. -* This consists of a Keysight ixia-c-one node with 2 ports connected to 2 ports on an srl linux node via two point-to-point ethernet links. Both nodes are also connected with their management interfaces to the containerlab docker network. - - ```sh - # After downloading the sample topology file - containerlab deploy --topo ixiac01.clab.yml - ``` - -- After deploying the topology now you are ready to run a test using this topology. - -### Run a test - -* Follow this [link](https://containerlab.dev/lab-examples/ixiacone-srl/#execution) to run a test. - -### Destroy/Remove the topology - - ```sh - # delete a particular topology - containerlab destroy --topo ixiac01.clab.yml - ``` \ No newline at end of file diff --git a/docs/developer/hello-snappi.md b/docs/developer/hello-snappi.md new file mode 100644 index 00000000..7b37b56b --- /dev/null +++ b/docs/developer/hello-snappi.md @@ -0,0 +1,396 @@ +## Use Case + +This tutorial explains some key elements that are required to write a **snappi script** for exercising the following topology. + +* Send 1000 UDP packets back and forth between the interfaces `eth1` & `eth2` at a rate of 1000 packets per second. +* Ensure that the correct number of valid UDP packets are received on both the ends, by using port capture and port metrics. + +The [hello_snappi.py](https://github.com/open-traffic-generator/snappi-tests/tree/3ffe20f/scripts/hello_snappi.py) script covers this extensively. + +![Ixia-C Deployment for Bidirectional Traffic](../res/ixia-c.drawio.svg) + +## Setup + +You can start by setting up the topology as described above. 
For more detail, see [deployment steps for two-arm scenario](../deployments.md#two-arm-scenario). + +```sh +git clone --recurse-submodules https://github.com/open-traffic-generator/ixia-c && cd ixia-c +docker-compose -f deployments/raw-two-arm.yml up -d +``` + +After the set up is completed, install the python packages: + +* [snappi](https://pypi.org/project/snappi/) - client SDK auto-generated from [Open Traffic Generator API](https://github.com/open-traffic-generator/models). +* [dpkt](https://pypi.org/project/dpkt/) - for processing `.pcap` files. + +```sh +python -m pip install --upgrade snappi==0.12.1 dpkt +``` + +## Create the API Handle + +The first step in any snappi script is to import the `snappi` package and instantiate an `api` object, where the `location` parameter takes the HTTPS/gRPC address of the controller and `verify` is used to turn off the insecure certificate warning. + +If the controller is deployed with a non-default TCP port by using the [deployment parameters](../deployments.md#deployment-parameters), it must be specified explicitly in the address (default port of HTTPS is 8443 and gRPC is 40051). + +```python +import snappi + +# HTTPS +api = snappi.api(location='https://localhost', verify=False) +# or with non-default TCP port +api = snappi.api(location='https://localhost:8080', verify=False) + +#gRPC +api = snappi.api(location="localhost:40051", transport=snappi.Transport.GRPC) +# or with non-default TCP port +api = snappi.api(location="localhost:50020", transport=snappi.Transport.GRPC) +``` + +
+Expand This section provides the details on an optional parameter ext which specifies the snappi extension to be loaded.
+ +If a traffic generator does not natively support the [Open Traffic Generator API](https://github.com/open-traffic-generator/models), snappi can be extended to write a translation layer to bridge the gap. For example, [snappi extension for IxNetwork](https://pypi.org/project/snappi-ixnetwork/). This can be installed by using `python -m pip install --upgrade snappi[ixnetwork]`. +```python +import snappi +# location here refers to HTTPS address of IxNetwork API Server +api = snappi.api(location="https://localhost", ext='ixnetwork', verify=False) +``` + +
+ +## Configuration + +You need to construct the traffic configuration to send it to the controller. Use the `api` object that you created previously. It will act as a handle for the following steps: + +* Create new objects for API request (or response) + + ```python + cfg = api.config() + ``` + + > `api.config()` is a factory function for creating an empty `snappi.Config` object, which encapsulates the parameters that the controller needs to configure different aspects of the traffic generator. The next sections discuss about these configuration parameters in details. + +* Initiate the API requests (and get back response) + + ```python + # this pushes object of type `snappi.Config` to controller + api.set_config(cfg) + # this retrieves back object of type `snappi.Config` from controller + cfg = api.get_config() + ``` + + > By default, API requests in snappi are made over HTTPS with payloads as a JSON string. Since each object in snappi inherits `SnappiObject` or `SnappiIter`, they all share a common method called `.serialize()` and `deserialize()`, that are used internally during the API requests, for valid conversion to / from a JSON string. You will find more about such conveniences offered by snappi along the way. + +
+Expand This section explains how you can effectively navigate through the snappi API documentation.
+ +The objects and methods (for API calls) in snappi are auto-generated from an [Open API Generator YAML file](https://redocly.github.io/redoc/?url=https://raw.githubusercontent.com/open-traffic-generator/models/v0.11.11/artifacts/openapi.yaml). This file adheres to the [OpenAPI Specification](https://github.com/OAI/OpenAPI-Specification), which can (by design) also be rendered as an interactive API documentation. + +[ReDoc](https://redocly.github.io/redoc/) is an open-source tool that provides a similar functionality. It accepts a link to valid OpenAPI YAML file and generates a document where all the methods (for API calls) are mentioned in the left navigation bar and for each selected method, there's a request/response body description in the center of the page. These descriptions lay out the entire object tree that documents each node in detail. + +The [snappi API documentation](https://redocly.github.io/redoc/?url=https://raw.githubusercontent.com/open-traffic-generator/models/v0.11.11/artifacts/openapi.yaml) will always point to the API version **v0.11.11**. To use a different version, do the following: + +* Identify the API version from [open-traffic-generator releases](https://github.com/open-traffic-generator/snappi/releases/download/v0.11.11/models-release) and replace **v0.11.11** in the URL with the intended snappi version. + +* Open the [open-traffic-generator models](https://redocly.github.io/redoc/?url=https://raw.githubusercontent.com/open-traffic-generator/models/v0.11.11/artifacts/openapi.yaml). + +
+ +## Ports + +Each instance of a **traffic-engine** is usually referred to as a `port`. As the ports are used to send or receive the traffic (as they are directly bound to the network interfaces), provide the following information to the config object, that you created earlier: + +* `name`: An unique identifier for each port. +* `location`: A DNS name or TCP socket address of the traffic-engine (format is specific to a given traffic-engine implementations). + +>Note: Unlike the config, creating a new port using `p = api.port()` is not required (and hence not supported), as the `snappi.Port` is never used directly as an API request or response. + +```python +# config has an attribute called `ports` which holds an iterator of type +# `snappi.PortIter`, where each item is of type `snappi.Port` (p1 and p2) +p1, p2 = cfg.ports.port(name="p1", location="localhost:5555").port( + name="p2", location="localhost:5556" +) +``` + +> Instead of using `append()`, use factory method `.port()` on `cfg.ports` which instantiates `snappi.Port`, appends it to `cfg.ports`, and returns the entire iterator (so that it can be unpacked or accessed like a simple list). This is applicable to other iterators in snappi, for example, flows, capture, and layer1. + +
+Expand this section for more examples on snappi iterators. + +```python +p = cfg.ports.port(name='p1').port(name='p2') +assert p[0].name == 'p1' + +p = cfg.ports.port(name='p3') +assert p[2].name == 'p3' + +# This will remove 3rd index port +cfg.ports.remove(2) +p4 = cfg.ports.port(name='p4')[-1] +assert p4.name == 'p4' + +# This will clear all the ports +cfg.ports.clear() +p5 = cfg.ports.port(name='p5')[0] +assert p5.name == 'p5' + +p6 = cfg.ports.add(name='p6') +assert p6.name == 'p6' + +p7 = p6.clone() +p7.name = 'p7' +cfg.ports.append(p7) +assert p7.name == 'p7' +``` + +
+ +## Layer1 + +The `ports` that you configured earlier, may require a set up for `layer1` (physical layer) properties like speed, MTU, promiscuous mode, and etc. + +```python +# config has an attribute called `layer1` which holds an iterator of type +# `snappi.Layer1Iter`, where each item is of type `snappi.Layer1` (ly) +ly = cfg.layer1.layer1(name="ly")[-1] +ly.speed = ly.SPEED_1_GBPS +# set same properties on both ports +ly.port_names = [p1.name, p2.name] +``` + +>Note: You can set an enum value (all uppercase) defined in the `ly`'s namespace, instead of using an arbitrary value to the `ly.speed`. These enum values are available in the [snappi API documentation](https://redocly.github.io/redoc/?url=https://raw.githubusercontent.com/open-traffic-generator/models/v0.11.11/artifacts/openapi.yaml). + +## Capture + +To start capturing packets on both the ports, enable `capture`. + +```python +# config has an attribute called `captures` which holds an iterator of type +# `snappi.CaptureIter`, where each item is of type `snappi.Capture` (cp) +cp = cfg.captures.capture(name="cp")[-1] +cp.port_names = [p1.name, p2.name] +``` + +### Flows + +This section describes how to set up the traffic flows. + +Each flow in snappi can be characterized based on the **tx/rx endpoints**, **duration**, **packet contents, packet rate, packet size**, and etc. + +You can configure two flows, one that originates from port `p1` and the other from port `p2`. 
+ +```python +# config has an attribute called `flows` which holds an iterator of type +# `snappi.FlowIter`, where each item is of type `snappi.Flow` (f1, f2) +f1, f2 = cfg.flows.flow(name="flow p1->p2").flow(name="flow p2->p1") + +# and assign source and destination ports for each +f1.tx_rx.port.tx_name, f1.tx_rx.port.rx_name = p1.name, p2.name +f2.tx_rx.port.tx_name, f2.tx_rx.port.rx_name = p2.name, p1.name + +# configure packet size, rate and duration for both flows +f1.size.fixed, f2.size.fixed = 128, 256 +for f in cfg.flows: + # send 1000 packets and stop + f.duration.fixed_packets.packets = 1000 + # send 1000 packets per second + f.rate.pps = 1000 +``` + +Optionally, the flow duration and rate can be configured as follows: + +```python +# send packets for 5 seconds and stop (we could also specify duration in terms +# of continuous or bursts) +f.duration.fixed_seconds.seconds = 5 +# send packets at 50% of configured speed (we could also specify absolute rates +# in terms of bps, kbps, etc.) +f.rate.percentage = 50 +``` + +>Note: The `f.rate` is **polymorphic** in nature. It can only be used to set either `pps` or `percentage`, but not both. A special attribute `choice` is used in such cases, which holds the name of the attribute that is currently in use. + +In snappi, `f.rate.choice` is automatically set based on the attribute that was last accessed. For example, + +```python +f.rate.pps = 100 +print(f.rate.serialize()) + +# output +{ + "choice": "pps", + "pps": 100 +} +``` + +>You can set (or access) the `f1.rate.pps` without instantiating an object of type `snappi.FlowRate`, which is held by the `f1.rate`. **Accessing an uninitialized attribute** automatically initializes it with the type of object it holds. + +## Protocol Headers + +Packets sent out in a `flow` needs to be described in terms of the underlying **protocol** and **payload** contents. If no such description is provided, a simple ethernet frame is configured by default. 
+ +The following section describes how you can construct a packet by adding Ethernet, IPv4, and UDP headers (strictly in an order, in which it should appear in the TCP/IP stack). + +```python +# configure packet with Ethernet, IPv4 and UDP headers for both flows +eth1, ip1, udp1 = f1.packet.ethernet().ipv4().udp() +eth2, ip2, udp2 = f2.packet.ethernet().ipv4().udp() +``` + +The `f1.packet` is an iterator which holds the items of type `snappi.FlowHeader` (a **polymorphic** type, instead of the **non-polymorphic** types). Hence, snappi automatically does the following under the hood: + +```python +eth1, ip1, udp1 = f.packet.header().header().header() +# set enum choice for each header and initialize intended object with empty +# fields just by accessing it +eth1.choice = e.ETHERNET +eth1.ethernet +ip1.choice = i.IPV4 +ip1.ipv4 +udp1.choice = u.UDP +udp1.udp +``` + +At this point, the headers still contain the default field values. Now, you can assign specific values to the various header fields. + +> The checksum and length fields in the most of the headers are automatically calculated and inserted before the packets are sent. + +### Setup Ethernet + +For the Ethernet header, assign a static source and the destination MAC address value. The ethernet type field is *automatically* set to `0x800`, since the next header is IPv4. + +```python +# set source and destination MAC addresses +eth1.src.value, eth1.dst.value = "00:AA:00:00:04:00", "00:AA:00:00:00:AA" +eth2.src.value, eth2.dst.value = "00:AA:00:00:00:AA", "00:AA:00:00:04:00" +``` + +### Setup IPv4 + +For IPv4 header also, assign a static source and the destination IPv4 address value. The IP protocol field is *automatically* set to `0x11`, since the next protocol in the stack is UDP. 
+ +```python +# set source and destination IPv4 addresses +ip1.src.value, ip1.dst.value = "10.0.0.1", "10.0.0.2" +ip2.src.value, ip2.dst.value = "10.0.0.2", "10.0.0.1" +``` + +### Setup UDP + +With the UDP header, instead of assigning a single (fixed) value for the header fields, assign multiple values. + +You can achieve this in snappi by using `increment`, `decrement`, and `list` patterns. + +```python +# set incrementing port numbers as source UDP ports +udp1.src_port.increment.start = 5000 +udp1.src_port.increment.step = 2 +udp1.src_port.increment.count = 10 + +udp2.src_port.increment.start = 6000 +udp2.src_port.increment.step = 4 +udp2.src_port.increment.count = 10 + +# assign list of port numbers as destination UDP ports +udp1.dst_port.values = [4000, 4044, 4060, 4074] +udp2.dst_port.values = [8000, 8044, 8060, 8074, 8082, 8084] +``` + +The above snippet will result in a sequence of packets as shown in the figure below. + +![hello-snappi-packets](../res/hello-snappi-packets.png) + +> The patterns for headers fields in snappi provide a very flexible way to generate millions of unique packets to test the DUT functionalities, like hashing based on 5-tuple. For more information, see [common snappi constructs](snappi-constructs.md) . + +## Start Capture and Traffic + +After you have added all the intended configuration parameters to the `cfg`, do the following: + +* Push it to the controller, so that the connection with the intended traffic-engines can be established and the intended configuration is applied (to each one of them). +* Start capturing packets on the configured ports. +* Start sending packets from the configured ports. + +Every time the `api.set_config()` is called, it essentially resets the state of the controller by **tearing down** any previous connections with traffic-engines and **overriding** any previous configuration. If the call fails at some point, `api.get_config()` will return an empty config. 
+ +```python +# push configuration to controller +api.set_config(cfg) + +# start packet capture on configured ports +cs = api.capture_state() +cs.state = cs.START +api.set_capture_state(cs) + +# start transmitting configured flows +ts = api.transmit_state() +ts.state = ts.START +api.set_transmit_state(ts) +``` + +> The transmit or capture will be started on all configured flows or ports respectively, unless you provide any specific flow or port name. For example, `cs.port_names = ['p1']`, `ts.flow_names = ['f1']`. + +## Fetch and Validate Metrics + +As you are sending 1000 packets, at a rate of 1000 packets per second, it should take 1 second for the transmit to complete. You can validate the same by using `metrics`. + +The API supports different kinds of metrics, but focus on the `port_metrics` which are similar to the linux network interface stats. + +```python +# create a port metrics request and filter based on port names +req = api.metrics_request() +req.port.port_names = [p.name for p in cfg.ports] +# include only sent and received packet counts +req.port.column_names = [req.port.FRAMES_TX, req.port.FRAMES_RX] + +# fetch port metrics +res = api.get_metrics(req) + +# calculate total frames sent and received across all configured ports +total_tx = sum([m.frames_tx for m in res.port_metrics]) +total_rx = sum([m.frames_rx for m in res.port_metrics]) +expected = sum([f.duration.fixed_packets.packets for f in cfg.flows]) + +assert expected == total_tx and total_rx >= expected +``` + +> Note: Usually this snippet needs to be executed multiple times, until the assertion in the end stands true or a timeout occurs. You can use a function called `wait_for()` in the `hello_snappi.py` script to achieve this. + +## Fetch and Validate Captures + +Validation by using metrics is limited to counters (for example, total transmitted, total received). To really inspect each packet received, you can use the capture API. 
This API is a little different from the others, in the following ways:  +* It returns a sequence of raw bytes (representing a `.pcap` file) instead of a JSON string. +* It needs to be fed to a tool that can inspect `.pcap` files. For example, `dpkt` or `tcpdump`.  This snippet uses `dpkt` to ensure that each packet received is a valid UDP packet.  ```python +for p in cfg.ports: +    # create capture request and filter based on port name +    req = api.capture_request() +    req.port_name = p.name +    # fetch captured pcap bytes and feed it to pcap parser dpkt +    pcap = dpkt.pcap.Reader(api.get_capture(req)) +    for _, buf in pcap: +        # check if current packet is a valid UDP packet +        eth = dpkt.ethernet.Ethernet(buf) +        assert isinstance(eth.data.data, dpkt.udp.UDP) +```  Optionally, the following snippet can be used to inspect the capture with `tcpdump` (`tcpdump -r cap.pcap`).  ```python +pcap_bytes = api.get_capture(req) +with open('cap.pcap', 'wb') as p: +    p.write(pcap_bytes.read()) +```  ## Putting It All Together  `snappi` provides a fair level of abstraction and ease-of-use while constructing traffic configuration, compared to the [equivalent in JSON](https://github.com/open-traffic-generator/snappi-tests/tree/3ffe20f/configs/hello_snappi.json). More such comparisons can be found in [common snappi constructs](snappi-constructs.md).  For more information on snappi (per-flow metrics, latency measurements, custom payloads, and so on) and examples of the pytest-based test scripts and utilities, see [snappi-tests](https://github.com/open-traffic-generator/snappi-tests/tree/3ffe20f). 
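The metrics-polling loop referenced in the tutorial above (the `wait_for()` helper mentioned in *Fetch and Validate Metrics*) can be sketched as a small, generic retry utility. This is a minimal, stdlib-only illustration of the idea, not the exact helper shipped in `hello_snappi.py`; the names `wait_for` and `metrics_ok` are placeholders:

```python
import time


def wait_for(condition, timeout=10, interval=0.2):
    """Poll condition() until it returns True or `timeout` seconds elapse.

    Returns True if the condition was met within the deadline, else False.
    """
    deadline = time.monotonic() + timeout
    while True:
        if condition():
            return True
        if time.monotonic() > deadline:
            return False
        time.sleep(interval)


# usage sketch: keep "fetching metrics" until all expected packets are counted
rx_count = {"frames": 0}


def metrics_ok():
    # stand-in for calling api.get_metrics(req) and summing frame counters
    rx_count["frames"] += 500
    return rx_count["frames"] >= 2000


assert wait_for(metrics_ok, timeout=5, interval=0.01)
```

In a real test, `metrics_ok` would call `api.get_metrics(req)` and compare the counters against the expected totals, exactly as in the metrics snippet above.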
diff --git a/docs/developer/introduction.md b/docs/developer/introduction.md new file mode 100644 index 00000000..196ad0c1 --- /dev/null +++ b/docs/developer/introduction.md @@ -0,0 +1,3 @@ +# Developer guide introduction + +Introduction to snappi diff --git a/docs/snappi-constructs.md b/docs/developer/snappi-constructs.md similarity index 94% rename from docs/snappi-constructs.md rename to docs/developer/snappi-constructs.md index 7b994aaa..d5b55ebf 100644 --- a/docs/snappi-constructs.md +++ b/docs/developer/snappi-constructs.md @@ -1,24 +1,5 @@ # Common snappi constructs -- [Table of Contents](readme.md) - - Common snappi constructs - * [Overview](#overview) - * [Flows](#flows) - * [Unidirectional Flow](#unidirectional-flow) - * [Bidirectional Flows](#bidirectional-flows) - * [Meshed Flows](#meshed-flows) - * [Protocol Headers With Fixed Fields](#protocol-headers-with-fixed-fields) - * [Protocol Headers With Varying Fields](#protocol-headers-with-varying-fields) - * [Start Flow Transmit](#start-flow-transmit) - * TODO: custom headers - * [Capture](#capture) - * [Capture Configuration](#capture-configuration) - * [Start Capture](#start-capture) - * [Get Capture](#get-capture) - * [Metrics](#metrics) - * [Port Metrics](#port-metrics) - * [Flow Metrics](#flow-metrics) - ## Overview Every object in snappi can be serialized to or deserialized from a JSON string which conforms to [Open Traffic Generator API](https://github.com/open-traffic-generator/models). This facilitates storing traffic configurations as JSON files and reusing them in API calls with or without further modifications. diff --git a/docs/developer/snappi-install.md b/docs/developer/snappi-install.md new file mode 100644 index 00000000..99e57480 --- /dev/null +++ b/docs/developer/snappi-install.md @@ -0,0 +1,196 @@ +# Installing Snappi + +The procedures explained in this section helps to install and configure snappi for an Open Traffic Generator API. 
+ +The test scripts written in **gosnappi**, and the auto-generated Go SDK, can be executed against any traffic generator that conforms to [Open Traffic Generator API](https://github.com/open-traffic-generator/models). + +[Ixia-c](https://github.com/open-traffic-generator/ixia-c) is one of such reference implementations of the Open Traffic Generator API. + +## To install Snappi for the Go language, do the following: + +### Setup the client + +```sh +go get github.com/open-traffic-generator/snappi/gosnappi +``` + +### Start Testing + +```Go +package examples + +import ( + "encoding/hex" + "testing" + "time" + + "github.com/open-traffic-generator/snappi/gosnappi" +) + +func TestQuickstart(t *testing.T) { + // Create a new API handle to make API calls against OTG + api := gosnappi.NewApi() + + // Set the transport protocol to HTTP + api.NewHttpTransport().SetLocation("https://localhost:8443") + + // Create a new traffic configuration that will be set on OTG + config := api.NewConfig() + + // Add a test port to the configuration + ptx := config.Ports().Add().SetName("ptx").SetLocation("veth-a") + + // Configure a flow and set previously created test port as one of endpoints + flow := config.Flows().Add().SetName("f1") + flow.TxRx().Port().SetTxName(ptx.Name()) + // and enable tracking flow metrics + flow.Metrics().SetEnable(true) + + // Configure number of packets to transmit for previously configured flow + flow.Duration().FixedPackets().SetPackets(100) + // and fixed byte size of all packets in the flow + flow.Size().SetFixed(128) + + // Configure protocol headers for all packets in the flow + pkt := flow.Packet() + eth := pkt.Add().Ethernet() + ipv4 := pkt.Add().Ipv4() + udp := pkt.Add().Udp() + cus := pkt.Add().Custom() + + eth.Dst().SetValue("00:11:22:33:44:55") + eth.Src().SetValue("00:11:22:33:44:66") + + ipv4.Src().SetValue("10.1.1.1") + ipv4.Dst().SetValue("20.1.1.1") + + // Configure repeating patterns for source and 
destination UDP ports + udp.SrcPort().SetValues([]int32{5010, 5015, 5020, 5025, 5030}) + udp.DstPort().Increment().SetStart(6010).SetStep(5).SetCount(5) + + // Configure custom bytes (hex string) in payload + cus.SetBytes(hex.EncodeToString([]byte("..QUICKSTART SNAPPI.."))) + + // Optionally, print JSON representation of config + if j, err := config.ToJson(); err != nil { + t.Fatal(err) + } else { + t.Log("Configuration: ", j) + } + + // Push traffic configuration constructed so far to OTG + if _, err := api.SetConfig(config); err != nil { + t.Fatal(err) + } + + // Start transmitting the packets from configured flow + ts := api.NewTransmitState() + ts.SetState(gosnappi.TransmitStateState.START) + if _, err := api.SetTransmitState(ts); err != nil { + t.Fatal(err) + } + + // Fetch metrics for configured flow + req := api.NewMetricsRequest() + req.Flow().SetFlowNames([]string{flow.Name()}) + // and keep polling until either expectation is met or deadline exceeds + deadline := time.Now().Add(10 * time.Second) + for { + metrics, err := api.GetMetrics(req) + if err != nil || time.Now().After(deadline) { + t.Fatalf("err = %v || deadline exceeded", err) + } + // print YAML representation of flow metrics + t.Log(metrics) + if metrics.FlowMetrics().Items()[0].Transmit() == gosnappi.FlowMetricTransmit.STOPPED { + break + } + time.Sleep(100 * time.Millisecond) + } +} +``` + +## To install Snappi for the Python language, do the following: + +### Setup the Client + +```sh +python -m pip install --upgrade snappi +``` + +### Start Testing + +```python +import datetime +import time +import snappi +import pytest + + +@pytest.mark.example +def test_quickstart(): + # Create a new API handle to make API calls against OTG + # with HTTP as default transport protocol + api = snappi.api(location="https://localhost:8443") + + # Create a new traffic configuration that will be set on OTG + config = api.config() + + # Add a test port to the configuration + ptx = config.ports.add(name="ptx", 
location="veth-a") + + # Configure a flow and set previously created test port as one of endpoints + flow = config.flows.add(name="flow") + flow.tx_rx.port.tx_name = ptx.name + # and enable tracking flow metrics + flow.metrics.enable = True + + # Configure number of packets to transmit for previously configured flow + flow.duration.fixed_packets.packets = 100 + # and fixed byte size of all packets in the flow + flow.size.fixed = 128 + + # Configure protocol headers for all packets in the flow + eth, ip, udp, cus = flow.packet.ethernet().ipv4().udp().custom() + + eth.src.value = "00:11:22:33:44:55" + eth.dst.value = "00:11:22:33:44:66" + + ip.src.value = "10.1.1.1" + ip.dst.value = "20.1.1.1" + + # Configure repeating patterns for source and destination UDP ports + udp.src_port.values = [5010, 5015, 5020, 5025, 5030] + udp.dst_port.increment.start = 6010 + udp.dst_port.increment.step = 5 + udp.dst_port.increment.count = 5 + + # Configure custom bytes (hex string) in payload + cus.bytes = "".join([hex(c)[2:] for c in b"..QUICKSTART SNAPPI.."]) + + # Optionally, print JSON representation of config + print("Configuration: ", config.serialize(encoding=config.JSON)) + + # Push traffic configuration constructed so far to OTG + api.set_config(config) + + # Start transmitting the packets from configured flow + ts = api.transmit_state() + ts.state = ts.START + api.set_transmit_state(ts) + + # Fetch metrics for configured flow + req = api.metrics_request() + req.flow.flow_names = [flow.name] + # and keep polling until either expectation is met or deadline exceeds + start = datetime.datetime.now() + while True: + metrics = api.get_metrics(req) + if (datetime.datetime.now() - start).seconds > 10: + raise Exception("deadline exceeded") + # print YAML representation of flow metrics + print(metrics) + if metrics.flow_metrics[0].transmit == metrics.flow_metrics[0].STOPPED: + break + time.sleep(0.1) +``` diff --git a/docs/faq.md b/docs/faq.md index a0f10f3a..5ab77196 100644 --- 
a/docs/faq.md +++ b/docs/faq.md @@ -40,7 +40,7 @@ Where can I find a tutorial on snappi?
-The [Hello snappi](hello-snappi.md) tutorial is a good starting point to get familiar with `snappi`. +The [Hello snappi](hello-snappi.md) tutorial is a good starting point to get familiar with `snappi`.
@@ -79,7 +79,7 @@ How to find version of Open Traffic Generator spec implemented by Ixia-c?
-Open Traffic Generator Data Model can be accessed from any browser by pointing it to (https://\<controller-ip\>/docs/openapi.json). The `info` section contains the `version` of the Open Traffic Generator Data Model implemented by the Ixia-c controller. +The Open Traffic Generator Data Model can be accessed from any browser by pointing it to (https://\<controller-ip\>/docs/openapi.json). The `info` section contains the `version` of the Open Traffic Generator Data Model implemented by the KENG controller.
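This lookup can also be scripted. The sketch below uses only the Python standard library; the default HTTPS port 8443, the self-signed-certificate handling, and the helper names are assumptions based on the default deployment, not part of snappi:

```python
import json
import ssl
import urllib.request


def spec_version(spec: dict) -> str:
    """Extract the Open Traffic Generator spec version from a parsed openapi.json."""
    return spec["info"]["version"]


def fetch_spec_version(host: str, port: int = 8443) -> str:
    # The controller serves HTTPS with a self-signed certificate by default,
    # so certificate verification is disabled here (for lab use only).
    ctx = ssl.create_default_context()
    ctx.check_hostname = False
    ctx.verify_mode = ssl.CERT_NONE
    url = f"https://{host}:{port}/docs/openapi.json"
    with urllib.request.urlopen(url, context=ctx) as resp:
        return spec_version(json.load(resp))


# The parsing step alone, on a sample document:
print(spec_version({"info": {"title": "Open Traffic Generator API", "version": "0.12.5"}}))  # → 0.12.5
```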
@@ -91,7 +91,7 @@ What do packets look like?
-Ixia packet testers utilize a proprietary flow-tracking technique which involves inserting a special *instrumentation header* into the packet. It is inserted after the last valid protocol header ie, in the payload. +Ixia packet testers utilize a proprietary flow-tracking technique which involves inserting a special *instrumentation header* into the packet. It is inserted after the last valid protocol header, i.e., in the payload.

@@ -153,7 +153,7 @@ What is Application Usage Reporter?
-The `app-usage-reporter` container collects and uploads to the Keysight cloud some basic telemetry information from the Ixia-c controller. This information helps Keysight improve the controller by focusing on the features that are being used by end users. +The `app-usage-reporter` container collects and uploads to the Keysight cloud some basic telemetry information from the KENG controller. This information helps Keysight improve the controller by focusing on the features that are being used by end users.
@@ -199,7 +199,7 @@ What are the limitations of the free version of Ixia-c?
-The free version of Ixia-c controller supports up to 4 ports in one session and the Ixia-c traffic-engine is limited to running over `raw` sockets. +The free version of KENG controller supports up to 4 ports in one session and the Ixia-c traffic-engine is limited to running over `raw` sockets.
@@ -231,7 +231,7 @@ Contact your Keysight Sales Rep or reach out to us [here](https://www.keysight.c
-How do I view Ixia-c controller logs? +How do I view KENG controller logs?
@@ -240,11 +240,11 @@ Use `docker logs` to view the controller log.
-What is the message "App usage reporting service is down" in Ixia-c controller log? +What is the message "App usage reporting service is down" in KENG controller log?
-This message indicates that the `app-usage-reporter` container is not reachable from the Ixia-c controller. This does NOT affect Ixia-c controller's normal operation. Refer to [Deployment Parameters](deployments.md#deployment-parameters) for more details on how to override the default location for the app-usage-reporter or how to disable it all together. +This message indicates that the `app-usage-reporter` container is not reachable from the KENG controller. This does NOT affect the KENG controller's normal operation. Refer to [Deployment Parameters](deployments.md#deployment-parameters) for more details on how to override the default location for the app-usage-reporter or how to disable it altogether.
## Support diff --git a/docs/hello-snappi.md b/docs/hello-snappi.md deleted file mode 100644 index b9f570ad..00000000 --- a/docs/hello-snappi.md +++ /dev/null @@ -1,418 +0,0 @@ -
-

Hello, snappi !

-

Your first snappi script

-
- -- [Table of Contents](readme.md) - - Hello, snappi ! - * [Use Case](#use-case) - * [Setup](#setup) - * [Create API Handle](#create-api-handle) - * [Config](#config) - * [Ports](#ports) - * [Config](#config) - * [Layer1](#layer1) - * [Capture](#capture) - * [Flows](#flows) - * [Protocol Headers](#protocol-headers) - * [Start Capture and Traffic](#start-capture-and-traffic) - * [Fetch and Validate Metrics](#fetch-and-validate-metrics) - * [Fetch and Validate Captures](#fetch-and-validate-captures) - * [Putting It All Together](#putting-it-all-together) - -### Use Case - -In this tutorial, we will walk through some key elements required to write a **snappi script** exercising the topology below. - -* Send 1000 UDP packets back and forth between interfaces eth1 & eth2 at a rate of 1000 packets per second. -* Ensure that indeed correct number of valid UDP packets are received on both ends using port capture and port metrics. - -The script [hello_snappi.py](https://github.com/open-traffic-generator/snappi-tests/tree/247fa80/scripts/hello_snappi.py) covers this extensively. -
- -
- -### Setup - -We start by setting up the topology as described above using [deployment steps for two-arm scenario](deployments.md#two-arm-scenario). - -```sh -git clone --recurse-submodules https://github.com/open-traffic-generator/ixia-c && cd ixia-c -docker-compose -f deployments/raw-two-arm.yml up -d -``` - -And installing python packages: - -* [snappi](https://pypi.org/project/snappi/) - client SDK auto-generated from [Open Traffic Generator API](https://github.com/open-traffic-generator/models). -* [dpkt](https://pypi.org/project/dpkt/) - for processing `.pcap` files. - -```sh -python -m pip install --upgrade snappi==0.12.6 dpkt -``` - -### Create API Handle - -The first step in any snappi script is to import the `snappi` package and instantiate an `api` object, where `location` parameter takes the HTTPS/gRPC address of the controller and `verify` is used to turn off insecure certificate warning. - -If the controller is deployed with a non-default TCP port using [deployment parameters](deployments.md#deployment-parameters), it must be specified explicitly in the address (default port of HTTPS is 8443 and gRPC is 40051). - -```python -import snappi - -# HTTPS -api = snappi.api(location='https://localhost', verify=False) -# or with non-default TCP port -api = snappi.api(location='https://localhost:8080', verify=False) - -#gRPC -api = snappi.api(location="localhost:40051", transport=snappi.Transport.GRPC) -# or with non-default TCP port -api = snappi.api(location="localhost:50020", transport=snappi.Transport.GRPC) -``` - -
-Expand this section for details on an optional parameter ext which specifies snappi extension to be loaded.
- -If a traffic generator doesn't natively support [Open Traffic Generator API](https://github.com/open-traffic-generator/models), snappi can be extended to write a translation layer to bridge the gap. An example is [snappi extension for IxNetwork](https://pypi.org/project/snappi-ixnetwork/) which can be installed using `python -m pip install --upgrade snappi[ixnetwork]`. - -```python -import snappi -# location here refers to HTTPS address of IxNetwork API Server -api = snappi.api(location="https://localhost", ext='ixnetwork', verify=False) -``` - -
- -### Config - -We now need to construct traffic configuration to be sent to controller. We'll need `api` object created previously, which acts as a handle for: - -* Creating new objects for API request (or response) - - ```python - cfg = api.config() - ``` - - > `api.config()` is a factory function for creating an empty `snappi.Config` object, which encapsulates the parameters that controller needs to configure different aspects of traffic generator. In next sections, we'll discuss in details about these configuration parameters. - -* Initiating API requests (and getting back response) - - ```python - # this pushes object of type `snappi.Config` to controller - api.set_config(cfg) - # this retrieves back object of type `snappi.Config` from controller - cfg = api.get_config() - ``` - - > By default, API requests in snappi are made over HTTPS with payloads as a JSON string. Since each object in snappi inherits `SnappiObject` or `SnappiIter`, they all share a common method called `.serialize()` and `deserialize()`, used internally during API requests, for valid conversion to / from a JSON string. We'll discuss about more such conveniences offered by snappi along the way. - -
-Expand this section for details on how to effectively navigate through snappi API documentation.
- -The objects and methods (for API calls) in snappi are auto-generated from an [Open API Generator YAML file](https://redocly.github.io/redoc/?url=https://raw.githubusercontent.com/open-traffic-generator/models/v0.12.5/artifacts/openapi.yaml). This file adheres to [OpenAPI Specification](https://github.com/OAI/OpenAPI-Specification), which can (by design) also be rendered as an interactive API documentation. - -[ReDoc](https://redocly.github.io/redoc/) is an open-source tool that does this. It accepts a link to valid OpenAPI YAML file and generates a document where all the methods (for API calls) are mentioned in the left navigation bar and for each selected method, there's a request / response body description in the center of the page. These descriptions lay out the entire object tree documenting each node in details. - -The snappi API documentation linked above will always point to API version **v0.12.5**. To use a different API version instead: - -* Identify API version by opening this link in a browser and replacing **v0.12.6** in URL with intended snappi version. - -* Open this link in a browser after replacing **v0.12.5** in URL with intended API version. - -
- -### Ports - -Each instance of a **traffic-engine** is usually referred to as a `port`. They're used to send or receive traffic (as they're directly bound to network interfaces) and hence, the config object we created previously needs to know about their: -* `name` - to uniquely identify each port. -* `location` - a DNS name or TCP socket address of traffic-engine (format is specific to a given traffic-engine implementations). - -Note, unlike config, creating a new port using `p = api.port()` is not required (and hence not supported), because `snappi.Port` is never used directly as an API request or response. - -```python -# config has an attribute called `ports` which holds an iterator of type -# `snappi.PortIter`, where each item is of type `snappi.Port` (p1 and p2) -p1, p2 = cfg.ports.port(name="p1", location="localhost:5555").port( - name="p2", location="localhost:5556" -) -``` - -> Instead of using `append()`, we use factory method `.port()` on `cfg.ports` which instantiates `snappi.Port`, appends it to `cfg.ports` and returns the entire iterator (so that it can be unpacked or accessed like a simple list). This is applicable to other iterators in snappi, e.g. flows, capture and layer1. - -
-Expand this section for more examples on snappi iterators. - -```python -p = cfg.ports.port(name='p1').port(name='p2') -assert p[0].name == 'p1' - -p = cfg.ports.port(name='p3') -assert p[2].name == 'p3' - -# This will remove 3rd index port -cfg.ports.remove(2) -p4 = cfg.ports.port(name='p4')[-1] -assert p4.name == 'p4' - -# This will clear all the ports -cfg.ports.clear() -p5 = cfg.ports.port(name='p5')[0] -assert p5.name == 'p5' - -p6 = cfg.ports.add(name='p6') -assert p6.name == 'p6' - -p7 = p6.clone() -p7.name = 'p7' -cfg.ports.append(p7) -assert p7.name == 'p7' -``` - -
- -### Layer1 - -The `ports` we configured previously may require setting `layer1` (physical layer) properties like speed, MTU, promiscuous mode, etc. - -```python -# config has an attribute called `layer1` which holds an iterator of type -# `snappi.Layer1Iter`, where each item is of type `snappi.Layer1` (ly) -ly = cfg.layer1.layer1(name="ly")[-1] -ly.speed = ly.SPEED_1_GBPS -# set same properties on both ports -ly.port_names = [p1.name, p2.name] -``` - -> Note how instead of setting an arbitrary value to `ly.speed`, we set an enum value (all uppercase) defined in `ly`'s namespace. These enum values are detailed in snappi API documentation. - -### Capture - -Since we also intend to start capturing packets on both ports, we enable `capture` like so. - -```python -# config has an attribute called `captures` which holds an iterator of type -# `snappi.CaptureIter`, where each item is of type `snappi.Capture` (cp) -cp = cfg.captures.capture(name="cp")[-1] -cp.port_names = [p1.name, p2.name] -``` - -### Flows - -We now get to the meat of our script, the part that sets up the traffic `flows`! Each flow in snappi can be characterized based on **tx/rx endpoints**, **duration**, **packet contents / rate / size**, etc. - -Here we configure two flows, one originating from port `p1` and the other from port `p2`. 
- -```python -# config has an attribute called `flows` which holds an iterator of type -# `snappi.FlowIter`, where each item is of type `snappi.Flow` (f1, f2) -f1, f2 = cfg.flows.flow(name="flow p1->p2").flow(name="flow p2->p1") - -# and assign source and destination ports for each -f1.tx_rx.port.tx_name, f1.tx_rx.port.rx_name = p1.name, p2.name -f2.tx_rx.port.tx_name, f2.tx_rx.port.rx_name = p2.name, p1.name - -# configure packet size, rate and duration for both flows -f1.size.fixed, f2.size.fixed = 128, 256 -for f in cfg.flows: - # send 1000 packets and stop - f.duration.fixed_packets.packets = 1000 - # send 1000 packets per second - f.rate.pps = 1000 -``` - -Optionally, flow duration and rate could be configured like so: - -```python -# send packets for 5 seconds and stop (we could also specify duration in terms -# of continuous or bursts) -f.duration.fixed_seconds.seconds = 5 -# send packets at 50% of configured speed (we could also specify absolute rates -# in terms of bps, kbps, etc.) -f.rate.percentage = 50 -``` - -Note that `f.rate` is **polymorphic** in nature, in that, it can only be used to set either `pps` or `percentage`, but not both. A special attribute `choice` is used in such cases, which holds the name of attribute currently in use. - -In snappi, `f.rate.choice` is automatically set based on the attribute that was last accessed. e.g. - -```python -f.rate.pps = 100 -print(f.rate.serialize()) - -# output -{ - "choice": "pps", - "pps": 100 -} -``` - ->We are able to set (or access) `f1.rate.pps` without instantiating object of type `snappi.FlowRate` held by `f1.rate`. This is because **accessing an uninitialized attribute** automatically initializes it with the type of object it holds. - -### Protocol Headers - -Packets sent out in a `flow` needs to be described in terms of underlying **protocol** and **payload** contents. If no such description is provided, a simple ethernet frame is configured by default. 
- -Here's how we construct our packet by adding Ethernet, IPv4 and UDP headers (strictly in an order it should appear in TCP/IP stack). - -```python -# configure packet with Ethernet, IPv4 and UDP headers for both flows -eth1, ip1, udp1 = f1.packet.ethernet().ipv4().udp() -eth2, ip2, udp2 = f2.packet.ethernet().ipv4().udp() -``` - -`f1.packet` is an iterator which holds items of type `snappi.FlowHeader` (a **polymorphic** type instead of **non-polymorphic** types we've seen so far). Hence, snappi automatically does following under the hood: - -```python -eth1, ip1, udp1 = f.packet.header().header().header() -# set enum choice for each header and initialize intended object with empty -# fields just by accessing it -eth1.choice = e.ETHERNET -eth1.ethernet -ip1.choice = i.IPV4 -ip1.ipv4 -udp1.choice = u.UDP -udp1.udp -``` - -At this point, the headers still contain default field values. Next, we'll assign specific values to various header fields. - -> The checksum and length fields in most headers are automatically calculated and inserted before sending out the packet. - -#### Setup Ethernet - -For Ethernet header, we simply assign static source and destination MAC address value. The ethernet type field is *automatically* set to `0x800` since the next header is IPv4. - -```python -# set source and destination MAC addresses -eth1.src.value, eth1.dst.value = "00:AA:00:00:04:00", "00:AA:00:00:00:AA" -eth2.src.value, eth2.dst.value = "00:AA:00:00:00:AA", "00:AA:00:00:04:00" -``` - -#### Setup IPv4 - -For IPv4 header as well, we assign static source and destination IPv4 address value. The IP protocol field is *automatically* set to `0x11` since the next protocol in the stack is UDP. - -```python -# set source and destination IPv4 addresses -ip1.src.value, ip1.dst.value = "10.0.0.1", "10.0.0.2" -ip2.src.value, ip2.dst.value = "10.0.0.2", "10.0.0.1" -``` - -#### Setup UDP - -With UDP header, we'll do something more interesting. 
Instead of assigning a single (fixed) value for header fields, which we did previously, we'll assign multiple values. - -We can achieve this in snappi by using `increment`, `decrement` and `list` patterns. - -```python -# set incrementing port numbers as source UDP ports -udp1.src_port.increment.start = 5000 -udp1.src_port.increment.step = 2 -udp1.src_port.increment.count = 10 - -udp2.src_port.increment.start = 6000 -udp2.src_port.increment.step = 4 -udp2.src_port.increment.count = 10 - -# assign list of port numbers as destination UDP ports -udp1.dst_port.values = [4000, 4044, 4060, 4074] -udp2.dst_port.values = [8000, 8044, 8060, 8074, 8082, 8084] -``` - -The snippet above will result in a sequence of packets as shown in the figure below. -
-
- -
-
- -> The patterns for headers fields in snappi provide a very flexible way to generate millions of unique packets to test DuT functionalities, like hashing based on 5-tuple. Checkout [common snappi constructs](snappi-constructs.md) for more details. - -### Start Capture and Traffic - -Now that we've added all the intended configuration parameters to `cfg`, we need to: -* Push it to the controller, so that connection with intended traffic-engines can be established and intended configuration is applied (to each one of them). -* Start capturing packets on configured ports -* Start sending packets from configured ports - -Every time `api.set_config()` is called, it essentially resets the state of the controller by **tearing down** any previous connections with traffic-engines and **overriding** any previous configuration. If the call fails at some point, `api.get_config()` will return an empty config. - -```python -# push configuration to controller -api.set_config(cfg) - -# start packet capture on configured ports -cs = api.capture_state() -cs.state = cs.START -api.set_capture_state(cs) - -# start transmitting configured flows -ts = api.transmit_state() -ts.state = ts.START -api.set_transmit_state(ts) -``` - -> Transmit or capture will be started on all configured flows or ports, respectively, unless one provides specific flow or port names. e.g. `cs.port_names = ['p1']`, `ts.flow_names = ['f1']`. - -### Fetch and Validate Metrics - -Since we're sending 1000 packets, at a rate of 1000 packets per second, it should take 1 second for transmit to be complete. We can validate the same using `metrics`. - -The API supports different kinds of metrics, but we'll focus on `port_metrics` which are similar to linux network interface stats. 
- -```python -# create a port metrics request and filter based on port names -req = api.metrics_request() -req.port.port_names = [p.name for p in cfg.ports] -# include only sent and received packet counts -req.port.column_names = [req.port.FRAMES_TX, req.port.FRAMES_RX] - -# fetch port metrics -res = api.get_metrics(req) - -# calculate total frames sent and received across all configured ports -total_tx = sum([m.frames_tx for m in res.port_metrics]) -total_rx = sum([m.frames_rx for m in res.port_metrics]) -expected = sum([f.duration.fixed_packets.packets for f in cfg.flows]) - -assert expected == total_tx and total_rx >= expected -``` - -> Note, usually this snippet will need to be executed multiple times until the assertion in the end stands true or timeout occurs. We use a function called `wait_for()` in `hello_snappi.py` script to achieve this. - -### Fetch and Validate Captures - -Validation using metrics is limited to counters (e.g. total transmitted, total received). To really inspect each packet received, we can use the capture API. - -This API is a little different from others, in that: -* It returns a sequence of raw bytes (representing `.pcap` file) instead of a JSON string. -* It needs to be fed to a tool that can inspect `.pcap` files. e.g. `dpkt` or `tcpdump` - -This snippet uses `dpkt` to ensure each packet received is a valid UDP packet. - -```python -for p in cfg.ports: - # create capture request and filter based on port name - req = api.capture_request() - req.port_name = p.name - # fetch captured pcap bytes and feed it to pcap parser dpkt - pcap = dpkt.pcap.Reader(api.get_capture(req)) - for _, buf in pcap: - # check if current packet is a valid UDP packet - eth = dpkt.ethernet.Ethernet(buf) - assert isinstance(eth.data.data, dpkt.udp.UDP) -``` - -Optionally following snippet can be used in order to do `tcpdump -r cap.pcap` (inspect captures using tcpdump). 
- -```python -pcap_bytes = api.get_capture(req) -with open('cap.pcap', 'wb') as p: - p.write(pcap_bytes.read()) -``` - -### Putting It All Together - -`snappi` provides a fair level of abstraction and ease-of-use while constructing traffic configuration compared to doing the [equivalent in JSON](https://github.com/open-traffic-generator/snappi-tests/tree/247fa80/configs/hello_snappi.json). More such comparisons can be found in [common snappi constructs](snappi-constructs.md). - -There's more to snappi than what we've presented here, e.g. per-flow metrics, latency measurements, custom payloads, etc. It will be worthwhile browsing through [snappi-tests](https://github.com/open-traffic-generator/snappi-tests/tree/247fa80) for more such examples, pytest-based test scripts and utilities. diff --git a/docs/index.md b/docs/index.md new file mode 100644 index 00000000..029a22c2 --- /dev/null +++ b/docs/index.md @@ -0,0 +1,52 @@ +

Ixia-c & Elastic Network Generator

+

Agile and composable network test system designed for continuous integration

+ +
+
+
+ +
+
+ +
+
+
+ +## Community Edition + +First in its class, the [Ixia-c Community Edition](quick-start/introduction.md) of the Elastic Network Generator, with **BGP emulation and a full set of traffic capabilities** [limited by scale and performance](licensing.md), is available to anyone without registration and at no cost. + +## OTG Examples + +Explore the [otg-examples](https://otg.dev/examples/otg-examples/) library to get hands-on experience with using Open Traffic Generator and Ixia-c. With a minimal Linux host or VM, you can be running your first network data and control plane validation scenarios in minutes. + +## Key Features + +* Software multi-container application: + * runs on Linux x86 compute, + * includes software traffic generation and protocol emulation capabilities, + * built using DPDK to generate high traffic rates on a single CPU core, + * can control Keysight network test hardware. +* Easily integrates into CI/CD pipelines like GitHub, GitLab, Jenkins. +* Supports test frameworks like Pytest or Golang test. +* Emulates key data center protocols with a high scale of sessions and routes: + * capable of leveraging 3rd party libraries to add unsupported packet formats, + * provides patterns to modify common packet header fields to generate millions of unique packets. +* Supports: + * configurable frame sizes, + * rate specification in pps (packets per second) or % line-rate, + * ability to send traffic bursts. +* Statistics: + * per port and per flow, + * tracks flows based on common packet header fields, + * one-way latency measurements (min, max, average) on a per-flow basis, + * capture packets and write to PCAP or analyze in the test logic.
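The "patterns to modify common packet header fields" bullet above is easy to make concrete. The following plain-Python sketch (an illustration only, not snappi API code) shows how an OTG-style increment pattern expands into field values, and how combining independent patterns multiplies the number of unique packets:

```python
def expand_increment(start: int, step: int, count: int) -> list[int]:
    """Expand an OTG-style increment pattern into the concrete field values
    that the generated packets will cycle through."""
    return [start + i * step for i in range(count)]


# A UDP destination-port pattern: start=6010, step=5, count=5
print(expand_increment(6010, 5, 5))  # → [6010, 6015, 6020, 6025, 6030]

# Independent patterns on two fields (e.g. src and dst port) combine
# multiplicatively; scaling the counts up quickly yields millions of
# unique packets for exercising DUT hashing.
src = expand_increment(5000, 2, 100)
dst = expand_increment(6000, 4, 100)
print(len({(s, d) for s in src for d in dst}))  # → 10000
```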
+ diff --git a/docs/integrated-environments.md b/docs/integrated-environments.md new file mode 100644 index 00000000..b7071ca3 --- /dev/null +++ b/docs/integrated-environments.md @@ -0,0 +1,6 @@ +## Network Topology Emulation + +Ixia-c supports the following modern network emulation software: + +* [Containerlab](deployments-containerlab.md): A simple yet powerful specialized tool for orchestrating and managing container-based networking labs. +* [OpenConfig KNE](deployments-kne.md): Kubernetes Network Emulation, a Google initiative to develop tooling for quickly setting up topologies of containers running various device OSes. diff --git a/docs/licensing.md b/docs/licensing.md new file mode 100644 index 00000000..bfa1d01a --- /dev/null +++ b/docs/licensing.md @@ -0,0 +1,76 @@ +# Licensing + +## License Editions + +The following License Editions are available for Keysight Elastic Network Generator: + + | Capability | Community | Developer | Team | System | + |---|---|---|---|---| + | Ixia-c Traffic Port Capacity | 4 x 1/10GE | 50GE | 400GE | 800GE | + | Test Concurrency | 1 Seat | 1 Seat | 8 Seats | 16 Seats | + | Protocol Scale | Restricted | Limited | Limited | Unlimited | + | Requires a valid license | N | Y | Y | Y | + | Includes Ixia-c Software Test Ports | Y | Y | Y | Y | + | Works with UHD400T Hardware | N | N | Y | Y | + | Works with IxOS Hardware | N | N | N | Y | + + The **Ixia-c Traffic Port Capacity** is determined as the sum of the configured Ixia-c test port speeds, with possible values of 100GE, 50GE, 40GE, 25GE, 10GE, and 1GE. The maximum data plane performance of an Ixia-c port may be less than the configured port speed, depending on the capabilities of the underlying hardware and software drivers.
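The port-capacity arithmetic described above can be sketched as follows (a plain illustration of the sums involved, under the assumption from this section that multiple licenses of one edition simply stack; this is not actual licensing logic):

```python
import math

# Traffic port capacities per license edition, in GE (from the table above)
EDITION_CAPACITY_GE = {"Developer": 50, "Team": 400, "System": 800}


def port_capacity_ge(configured_port_speeds_ge: list[int]) -> int:
    """Ixia-c Traffic Port Capacity: the sum of the configured test port speeds."""
    return sum(configured_port_speeds_ge)


def licenses_consumed(configured_port_speeds_ge: list[int], edition: str) -> int:
    """How many licenses of a single edition a test would consume, assuming
    licenses of one edition stack to cover the configured capacity."""
    return math.ceil(port_capacity_ge(configured_port_speeds_ge) / EDITION_CAPACITY_GE[edition])


# Two 50GE test ports -> 100GE capacity -> two Developer (50GE) licenses
print(licenses_consumed([50, 50], "Developer"))  # → 2
# The same test fits within a single Team (400GE) license
print(licenses_consumed([50, 50], "Team"))  # → 1
```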
+ + The **Test seat concurrency** applies to the number of controller instances that are running with a configuration that exceeds the capabilities of the Community Edition. + + The **Restricted** protocol scale supports a maximum of 4 BGP sessions per test. + + The capabilities of the **Limited** protocol scale depend on the protocol. For details, contact [Keysight Support](https://support.ixiacom.com/contact/support). + + Keysight Elastic Network Generator can simultaneously consume multiple licenses to increase the capabilities of a test. For example, if the Ixia-c Traffic Port Capacity configured in one test is 100GE, two Developer licenses will be consumed if available. + + If you require capabilities beyond those provided by the Community Edition, use the [Keysight Elastic Network Generator](https://www.keysight.com/us/en/products/network-test/protocol-load-test/keysight-elastic-network-generator.html) product page to request an evaluation or a quote. + +## License Server + +Keysight uses a license server to manage floating (network-shared) licenses for its software products. The license server enables licenses to float rather than be tied to a specific Elastic Network Generator instance. The Elastic Network Generator controllers must be able to reach the License Server. + +To use the capabilities of Elastic Network Generator that require a valid license, you need to deploy a Keysight License Server. The License Server is a virtual machine, distributed as OVA and QCOW2 images (you only need one of them, depending on your hypervisor). + +* [QCOW2 image](https://storage.googleapis.com/kt-nas-images-cloud-ist/slum-4.2.0-208.2.qcow2), ~6GB + +To decide where to deploy the License Server VM, take into account the following requirements: + +* For Linux-based QEMU or KVM, use the QCOW2 image +* 2 vCPU cores +* 4GB of RAM +* 100GB storage +* 1 vNIC for network connectivity.
Note that DHCP is the preferred option, and this is also how the VM is configured to obtain its IP address. + +Network connectivity requirements for the License Server VM: + +1. Internet access from the VM over HTTPS is desirable for online license activation, but not strictly required. An offline activation method is available as well. +2. Access from a user over SSH (TCP/22) for license operations (activation, deactivation, reservation, sync). +3. Access from any `keng-controller` that needs a license during a test run, over gRPC (TCP/7443), for license checkout and check-in. + +Here is an example of how different components communicate with the License Server: + +![License Server Connectivity](./res/license-server.drawio.svg) + +## Configuring a static IP address + +If your network doesn't provide DHCP, you can configure a static IP address for the License Server VM. Access the VM console and go through the two-step login process: +* first prompt: `console` (no password) +* second prompt: `admin`/`admin` + +Run the following commands to configure a static IP address, where `x.x.x.x` is the IP address, `yy` is the prefix length, `z.z.z.z` is the default gateway, and `a.a.a.a` and `b.b.b.b` are DNS servers: + +```Shell +kcos networking ip set mgmt0 x.x.x.x/yy z.z.z.z +kcos networking dns-servers add a.a.a.a b.b.b.b +``` + +## License Activation + +You will now be able to activate licenses and use the License Server on your Elastic Network Generator setup. Go to `https://your-license-server-hostname` to access the application. Enter the credentials `admin`/`admin` to log in. + +If you have an activation code, to perform an online activation, click "Activate Licenses", enter the code, and click "Activate". For offline mode, choose "Offline Operations" instead. + +You can also use a command-line session, via console or SSH, to perform license operations. Run `kcos licensing --help` to see the list of available commands.
+ +## Connecting Elastic Network Generator to the License Server + +To connect an Elastic Network Generator controller instance to the License Server, use the `--license-servers="server1 server2 server3 server4"` argument when launching the controller. Alternatively, use the environment variable `LICENSE_SERVERS`. The argument accepts a space-separated list of hostnames or IP addresses of License Servers, up to four. The controller will try to connect to the License Servers in the order they are specified in the list. If the first License Server is not available, or doesn't have enough available licenses to run the test, the controller will try to connect to the next one in the list. \ No newline at end of file diff --git a/docs/limitations.md b/docs/limitations.md index 163ed366..6f6ed8ae 100644 --- a/docs/limitations.md +++ b/docs/limitations.md @@ -1,7 +1,5 @@ # Limitations -* [Table of Contents](readme.md) - * Supported protocol headers are `ethernet`, `ipv4`, `ipv6`, `vlan`, `tcp`, `udp`, `gtpv1`, `gtpv2`, `arp`, `icmp` and `custom`. * `fixed_packets`, `fixed_seconds`,`continuous` and `burst` are supported for flow duration (fixed number of `burst` is not supported). * Size of the packet must be a value greater than or equal to 64 bytes. diff --git a/docs/prerequisites.md b/docs/prerequisites.md index 7d003597..977a6fe7 100644 --- a/docs/prerequisites.md +++ b/docs/prerequisites.md @@ -1,39 +1,46 @@ # Ixia-c Prerequisites -* [Table of Contents](readme.md) - ## System Prerequisites ### CPU and RAM -- `controller` - each instance requires at least 1 CPU core and 2GB RAM. -- `traffic-engine` - each instance requires 2 dedicated CPU cores and 3GB dedicated RAM (FIXME). +The minimum memory and CPU requirements for a basic use-case are as follows: + +* `keng-controller`: Each instance requires at least 10m CPU core and 25Mi RAM. +* `ixia-c-traffic-engine`: Each instance requires 200m CPU core per test port, plus one shared CPU core and 60Mi RAM.
Generic formula for CPU cores is `1 + 2 * number_of_ports`. +* `ixia-c-protocol-engine`: Each instance requires 200m CPU core and 350Mi RAM per port. + +For more granularity on resource requirements for advanced deployments, see [Resource requirements](reference/resource-requirements.md). ### OS and Software -- x86_64 Linux Distribution (Centos 7+ or Ubuntu 18+ have been tested) -- Python 2.7+ or Python 3.6+ -- Docker 19+ (as distributed by https://docs.docker.com/) +* x86_64 Linux Distribution (Centos 7+ or Ubuntu 18+ have been tested) +* Docker 19+ (as distributed by <https://docs.docker.com/>) +* For the test environment, + * Python 3.6+ (with `pip`) or + * Go 1.17+ ## Software Prerequisites ### Docker +* Docker Engine (Community Edition) + ### Python - - **Please make sure you have `python` and `pip` installed on your system.** +* **Ensure that you have `python` and `pip` installed on your system.** - You may have to use `python3` or `absolute path to python executable` depending on Python Installation on system, instead of `python`. + You may have to use `python3` or `absolute path to python executable`, depending on the Python installation on your system. ```sh python -m pip --help ``` - - Please see [pip installation guide](https://pip.pypa.io/en/stable/installing/), if you don't see a help message. - - **It is recommended that you use a python virtual environment for development.** + If you do not see a help message, see the [pip installation guide](https://pip.pypa.io/en/stable/installing/). - ```sh + * **It is recommended that you use a python virtual environment for development.** + + ```sh python -m pip install --upgrade virtualenv # create virtual environment inside `env/` and activate it. python -m virtualenv env @@ -43,24 +50,25 @@ env\Scripts\activate on Windows ``` -> If you do not wish to activate virtual env, you can use `env/bin/python` (or `env\scripts\python` on Windows) instead of `python`. 
- +> If you do not want to activate the virtual env, use `env/bin/python` (or `env\scripts\python` on Windows) instead of `python`. ## Network Interface Prerequisites -In order for Ixia-c Traffic Engine to function, several settings need to be tuned on the host system as described below. +In order for `ixia-c-traffic-engine` to function, several settings must be tuned on the host system, as follows: -1. Ensure existing network interfaces are `Up` and have `Promiscuous` mode enabled. +1. Ensure that all the existing network interfaces are `Up` and have `Promiscuous` mode enabled. - ```sh - # check interface details - ip addr - # configure as required - ip link set eth1 up - ip link set eth1 promisc on - ``` +* The following example illustrates configuring a sample interface `eth1`: + + ```sh + # check interface details + ip addr + # configure as required + ip link set eth1 up + ip link set eth1 promisc on + ``` -2. (Optional) To deploy `traffic-engine` against veth interface pairs, you need to create them as follows: +2. (Optional) To deploy `ixia-c-traffic-engine` against `veth` interface pairs, create them as follows: ```sh # create veth pair veth1 and veth2 @@ -68,4 +76,3 @@ In order for Ixia-c Traffic Engine to function, several settings need to be tune ip link set veth1 up ip link set veth2 up ``` - diff --git a/docs/quick-start/deployment.md b/docs/quick-start/deployment.md new file mode 100644 index 00000000..a924fe6c --- /dev/null +++ b/docs/quick-start/deployment.md @@ -0,0 +1,36 @@ +# Deployment + +Ixia-c is distributed and deployed as a multi-container application that consists of the following services: + +* **controller**: Serves API requests from clients and manages the workflow across one or more traffic engines. +* **traffic-engine**: Generates, captures, and processes traffic from one or more network interfaces (on Linux-based OS).
+* **app-usage-reporter**: (Optional) Collects anonymous usage reports from the controller and uploads them to the Keysight Cloud, with minimal impact on host resources. + +All these services are available as docker images on the [GitHub Open-Traffic-Generator repository](https://github.com/orgs/open-traffic-generator/packages). To use specific versions of these images, see [Ixia-c Releases](../releases.md). + +![ixia-c-aur](../res/ixia-c-aur.drawio.svg "ixia-c-aur") + +> Once the services are deployed, [snappi-tests](https://github.com/open-traffic-generator/snappi-tests/tree/3ffe20f) (a collection of [snappi](https://pypi.org/project/snappi/) test scripts and configurations) can be set up to run against Ixia-c. + +## Deploy Ixia-c using docker-compose + +Deploying each service manually (along with its required parameters) is cumbersome in many scenarios. For convenience, the [deployments](../deployments) directory consists of the following `docker-compose` files: + +- `*.yml`: Describes the services for a given scenario and the deployment parameters that are required to start them. +- `.env`: Holds the default parameters used across all `*.yml` files, for example, the name of the interface and the version of the docker images. + +If a given `.yml` file does not reference certain variables from `.env`, those variables can safely be ignored. +The following is an example of a typical workflow using `docker-compose`. + +```sh +# change default parameters if needed; e.g. interface name, image version, etc. +vi deployments/.env +# deploy and start services for community users +docker-compose -f deployments/.yml up -d +# stop and remove services deployed +docker-compose -f deployments/.yml down +``` + +On most systems, `docker-compose` needs to be installed separately even if Docker is already installed. Before you start, ensure that the [docker prerequisites](../prerequisites.md#docker) are met.
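For orientation, a compose file along these lines pairs the controller with one traffic engine. This is an illustrative sketch only — the service names, command flags, environment variables, and interface name are assumptions modeled on the public ixia-c deployment examples; the authoritative definitions live in the `deployments` directory:

```yaml
# Illustrative sketch only; see the *.yml files under deployments/ for the real definitions.
services:
  controller:
    image: ghcr.io/open-traffic-generator/keng-controller:0.1.0-53
    command: ["--accept-eula"]
    ports:
      - "8443:8443"
  traffic-engine:
    image: ghcr.io/open-traffic-generator/ixia-c-traffic-engine:1.6.0.85
    network_mode: host
    privileged: true
    environment:
      # bind the engine to one test interface (interface name is an assumption)
      ARG_IFACE_LIST: virtual@af_packet,veth1
      OPCODE_LISTEN_PORT: "5555"
```

The `.env` file described above is what normally supplies values such as the interface name and image tags hard-coded in this sketch.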
+ +For more information on deployment, see the [Deployment Guide](../deployments.md). diff --git a/docs/quick-start/introduction.md b/docs/quick-start/introduction.md new file mode 100644 index 00000000..b1b93d76 --- /dev/null +++ b/docs/quick-start/introduction.md @@ -0,0 +1,68 @@ +## What is Ixia-c? + +- A modern, powerful and **API-driven** traffic generator designed to cater to the needs of hyper-scalers, network hardware vendors and hobbyists alike. + +- **Free for basic use-cases**, distributed and deployed as a multi-container application consisting primarily of a [controller](https://github.com/orgs/open-traffic-generator/packages/container/package/keng-controller), a [traffic-engine](https://github.com/orgs/open-traffic-generator/packages/container/package/Ixia-c-traffic-engine) and a [protocol-engine](https://github.com/orgs/open-traffic-generator/packages/container/package/Ixia-c-protocol-engine). + +- As a reference implementation of the [Open Traffic Generator API](https://github.com/open-traffic-generator/models), it supports client SDKs in various languages, the most prevalent being [snappi](https://github.com/open-traffic-generator/snappi) (Python SDK) and [gosnappi](https://github.com/open-traffic-generator/snappi/tree/main/gosnappi). +
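To make the API-driven idea concrete: the configuration a snappi or gosnappi client pushes to the controller is plain JSON conforming to the Open Traffic Generator data model. The following is a minimal sketch in Python; the port locations, flow values, and names are illustrative placeholders, not taken from any shipped example:

```python
import json

# Minimal OTG-style configuration: two ports and one fixed-size flow.
# Field names follow the Open Traffic Generator data model; the port
# locations and counts are illustrative assumptions.
config = {
    "ports": [
        {"name": "p1", "location": "localhost:5555"},
        {"name": "p2", "location": "localhost:5556"},
    ],
    "flows": [
        {
            "name": "f1",
            "tx_rx": {"choice": "port", "port": {"tx_name": "p1", "rx_name": "p2"}},
            "size": {"choice": "fixed", "fixed": 128},
            "duration": {
                "choice": "fixed_packets",
                "fixed_packets": {"packets": 1000},
            },
            "metrics": {"enable": True},
        }
    ],
}

# Serialized form of what a client would push to the controller's /config endpoint.
print(json.dumps(config, indent=2))
```

The SDKs generate and validate exactly this kind of payload for you, so hand-writing it is only useful for understanding what travels over the wire.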

+Ixia-c deployment for two-arm test with DUT +

+ +## Quick Start + +Please ensure that the following prerequisites are met by the setup: + +* At least **2 x86_64 CPU cores** and **7GB RAM**, preferably running **Ubuntu 22.04 LTS** OS +* **Python 3.8+** (and **pip**) or **Go 1.19+** +* **Docker Engine** (Community Edition) + + +### 1. Deploy Ixia-c + +```bash +# clone this repository +git clone --recurse-submodules https://github.com/open-traffic-generator/Ixia-c.git && cd Ixia-c + +# create a veth pair and deploy Ixia-c containers where one traffic-engine is bound +# to each interface in the pair, and controller is configured to figure out how to +# talk to those traffic-engine containers +cd conformance && ./do.sh topo new dp +``` + +### 2. Set up and run a standalone test using [snappi](https://github.com/open-traffic-generator/snappi) or [gosnappi](https://github.com/open-traffic-generator/snappi/tree/main/gosnappi) + +```bash +# change dir to conformance if you haven't already +cd conformance + +# setup python virtual environment and install dependencies +./do.sh prepytest + +# run standalone snappi test that configures and sends UDP traffic +# upon successful run, flow metrics shall be printed on console +./do.sh pytest examples/test_quickstart.py + +# optionally, go equivalent of the test can be run like so +./do.sh gotest examples/quickstart_test.go +``` + +> Check out the contents of [test_quickstart.py](https://github.com/open-traffic-generator/conformance/blob/22563e20fe512ef13baf44c1bc69bc59f87f6c25/examples/test_quickstart.py) and the equivalent [quickstart_test.go](https://github.com/open-traffic-generator/conformance/blob/22563e20fe512ef13baf44c1bc69bc59f87f6c25/examples/quickstart_test.go) for a quick explanation of the test steps. + +### 3. Optionally, run a test using [curl](https://curl.se/) + +We can also pass an equivalent **JSON configuration** directly to the **controller**, just by using **curl**.
+The description of each node in the configuration is detailed in the self-updating [online documentation](https://redocly.github.io/redoc/?url=https://raw.githubusercontent.com/open-traffic-generator/models/v0.13.0/artifacts/openapi.yaml). + + +```bash +# push traffic configuration +curl -skL https://localhost:8443/config -H "Content-Type: application/json" -d @conformance/examples/quickstart_config.json + +# start transmitting configured flows +curl -skL https://localhost:8443/control/state -H "Content-Type: application/json" -d @conformance/examples/quickstart_control.json + +# fetch flow metrics +curl -skL https://localhost:8443/monitor/metrics -H "Content-Type: application/json" -d @conformance/examples/quickstart_metrics.json +``` diff --git a/docs/quick-start/sample-test.md b/docs/quick-start/sample-test.md new file mode 100644 index 00000000..0ab1c61b --- /dev/null +++ b/docs/quick-start/sample-test.md @@ -0,0 +1,3 @@ +# Quick start sample test + +How to run the sample test diff --git a/docs/readme.md b/docs/readme.md deleted file mode 100644 index f7203552..00000000 --- a/docs/readme.md +++ /dev/null @@ -1,35 +0,0 @@ -# Table of Contents - -1. [Architecture](architecture.md) -2. [Prerequisites](prerequisites.md) -3. [Deployment Guide](deployments.md) - * [Overview](deployments.md#overview) - * [Bootstrap](deployments.md#bootstrap) - * [Deployment Parameters](deployments.md#deployment-parameters) - * [Diagnostics](deployments.md#diagnostics) - * [Test Suite](deployments.md#test-suite) - * [One-arm Scenario](deployments.md#one-arm-scenario) - * [Two-arm Scenario](deployments.md#two-arm-scenario) - * [Three-arm Mesh Scenario](deployments.md#three-arm-mesh-scenario) -4.
[Hello snappi !](hello-snappi.md) - * [Use Case](hello-snappi.md#use-case) - * [Setup](hello-snappi.md#setup) - * [Create API Handle](hello-snappi.md#create-api-handle) - * [Config](hello-snappi.md#config) - * [Ports](hello-snappi.md#ports) - * [Config](hello-snappi.md#config) - * [Layer1](hello-snappi.md#layer1) - * [Capture](hello-snappi.md#capture) - * [Flows](hello-snappi.md#flows) - * [Protocol Headers](hello-snappi.md#protocol-headers) - * [Start Capture and Traffic](hello-snappi.md#start-capture-and-traffic) - * [Fetch and Validate Metrics](hello-snappi.md#fetch-and-validate-metrics) - * [Fetch and Validate Captures](hello-snappi.md#fetch-and-validate-captures) - * [Putting It All Together](hello-snappi.md#putting-it-all-together) -5. [Common snappi constructs](snappi-constructs.md) - * [Overview](snappi-constructs.md#overview) - * [Flows](snappi-constructs.md#flows) - * [Capture](snappi-constructs.md#capture) - * [Metrics](snappi-constructs.md#metrics) -6. [Releases](releases.md) -7. 
[End User License Agreement](eula.md) \ No newline at end of file diff --git a/docs/reference/capabilities.md b/docs/reference/capabilities.md new file mode 100644 index 00000000..f82c5bbf --- /dev/null +++ b/docs/reference/capabilities.md @@ -0,0 +1,64 @@ +# Supported capabilities + +## Protocol emulation + +| Feature | OTG model specification | Ixia-c software | IxOS hardware | UHD400T system | Comments | +|---|---|---|---|---|---| +| **BGP(v4/v6)** | Y | Y | Y | Y | | +| v4/v6 Routes | Y | Y | Y | Y | | +| Route Withdraw/Re-advertise | Y | Y | Y | Y | | +| Md5 Authentication | Y | Y | Y | Y | | +| Learned Routes Retrieval | Y | Y | Y | Y | | +| Extended Community | Y | Y | Y | Y | | +| Graceful Restart (Helper and Restarting) | | Y | Y | Y | | +| **Static LAG** | Y | Y | Y | N | | +| **LAG with LACP** | Y | Y | Y | N | | +| Protocols/Data over LAG with traffic switch | Y | Y | N | N | | +| **ISIS** | | | | N | | +| v4/v6 Routes | Y | | | N | | +| Learned Routes Retrieval | Y | Y | _N_ | N | | +| Simulated Topology | N | N | N | N | | +| Segment Routing | N | N | N | N | | +| Multiple ports/adjacencies | Y | N | N | N | | +| **RSVP p2p LSPs (Ingress or Egress)** | Y | Y | Y | N | UHD work = MPLS Label insertion in traffic flows. | +| Srefresh and Bundle extensions | Y | Y | Y | N | | +| **LLDP** | Y | Y | N | Y | Should work on UHD as it is. 
| +| Per Port | Y | Y | N | Y | | +| Learned LLDP Neighbors | Y | Y | N | Y | | +| Per LAG member Port | Y | Y | N | N | | + +## Traffic generation + +| Feature | OTG model specification | Ixia-c software | IxOS hardware | UHD400T system | Comments | +|---|---|---|---|---|---| +| Egress Tracking | Y | Y | Y | N | | +| Imix | Y | Y | Y | N | | +| Dynamic ARP Resolution | Y | Y | Y | Y | | +| Dynamic Frame Size control | Y | Y | Y | N | | +| Dynamic Rate Control | Y | Y | N | N | | +| Multiple Rx Ports and drilldown | Y | Y | Y| N | | +| **Packet headers** | | | | | | +| Vlan | Y | Y | Y | Y | | +| IPv4 | Y | Y | Y | Y | | +| IPv6 | Y | Y | Y | Y | | +| TCP | Y | Y | Y | Y | | +| UDP | Y | Y | Y | Y | | +| MPLS | Y | Y | Y | N | | +| ARP | Y | Y | Y | Y | | +| PPP | Y | Y | Y | N | | +| GRE| Y | Y | Y | N | | +|IGMPv1 | Y | Y | Y | N | | +| ICMP | Y | Y | Y |N | | +| ICMPv6 | Y | Y | Y | N | | +| ETHERNETPAUSE | Y | Y | Y | N | | +| VXLAN | Y | Y | Y | N | | +| PFCPAUSE | Y | N | Y | N | | +| CUSTOM | Y | Y | Y | N | | + +## Infrastructure + +| Feature | OTG model specification | Ixia-c software | IxOS hardware | UHD400T system | Comments | +|---|---|---|---|---|---| +| Capture (Rx only) | Y | Y | Y | N | | +| Link Down/Up | Y | N | Y | N | | +| MTU greater than 1500 | Y (under disc for L1) | N | Y | N | Need to change/fix L1 properties for common script to work with MTU setting. Ixia-c pending controller handling . PE/TE supports MTU changes. | diff --git a/docs/reference/resource-requirements.md b/docs/reference/resource-requirements.md new file mode 100644 index 00000000..04e3787b --- /dev/null +++ b/docs/reference/resource-requirements.md @@ -0,0 +1,47 @@ +# Resource requirement + +The minimum memory and cpu requirements for each Ixia-c components are captured in the following table. Kubernetes metrics server has been used to collect the resource usage data. + +The memory represents the minimum working set of memory required. 
For protocol and traffic engines, it varies depending on the number of co-located ports. For example, multiple ports are added to a 'group' for LAG use-cases, when a single test container has more than one test NIC connected to the DUT. The figures are in Mi or MB per container and do not include shared or cached memory across multiple containers/pods in a system. + +| Component |1 Port (Default)| 2 Port | 4 Port | 6 Port |8 Port| +|:--- |:--- |:--- |:--- |:--- |:--- | +| Protocol Engine | 350 | 420 | 440 | 460 | 480 | +| Traffic Engine | 60 | 70 | 90 | 110 | 130 | +| Controller | 25* | | | | | +| gNMI | 15* | | | | | + +>Note: Controller and gNMI have a fixed minimum memory requirement and are currently not dependent on the number of test ports in the topology. + +The CPU resource figures are in millicores. + +| | Protocol Engine | Traffic Engine | Controller | gNMI | +| :--- | :--- | :--- | :--- | :--- | +| Min CPU | 200 | 200 | 10 | 10 | + +## Minimum and maximum resource usage based on various test configurations + +Depending on the nature of the test run, the memory and CPU resource requirements may vary across all Ixia-c components. The following table captures the memory usage for LAG scenarios with varying numbers of member ports. The minimum value represents the initial memory on topology deployment and the maximum value indicates the peak memory usage during the test run. The values are in Mi or MB. + +| Component | Min/Max | 1 Port | 2 Port | 4 Port | 6 Port | 8 Port | +|:--- |:--- |:--- |:--- |:--- |:--- |:--- | +|Protocol Engine| Max<br>Min |348<br>323|423<br>360|455<br>360|464<br>360|492<br>360| +|Traffic Engine | Max<br>Min |58<br>47 | 68<br>49| 90<br>49|111<br>49 |134<br>49 | +| Controller | Max<br>Min |21<br>13 | 21<br>13 | 23<br>13 | 24<br>13 |25<br>13 | +| gNMI | Max<br>Min |14<br>7 | 14<br>7 | 14<br>7 | 14<br>7 | 14<br>7 | + +The following shows the memory usage variation with scaling in the control plane, varying the number of BGP sessions (1K, 5K, and 10K) in a back-to-back setup. The values are in Mi or MB. + +| Component | Min/Max | 1K | 5K | 10K | +| :---------- | :------- | :------ | :------ | :----- | +| Protocol Engine| Max<br>Min |516<br>323|906<br>323|1367<br>323| +| Controller | Max<br>Min |53<br>12 |149<br>12 |259<br>12 | +| gNMI | Max<br>Min | 7<br>7 | 7<br>7 | 7<br>7 | + +The following shows the memory usage variation with scaling in the data plane, varying the number of MPLS flows (10, 1K, and 4K) in a back-to-back setup with labels provided by the RSVP-TE control plane. The values are in Mi or MB. + +| Component | Min/Max | 10 | 1K | 4K | +| :---------- | :------- | :------| :------| :------ | +| Traffic Engine | Max<br>Min |58<br>47|59<br>47|95<br>47 | +| Controller | Max<br>Min |18<br>12|46<br>12|120<br>12| +| gNMI | Max<br>Min |10<br>7 |17<br>7 |28<br>
7 | diff --git a/docs/res/UHD100T32.png b/docs/res/UHD100T32.png new file mode 100644 index 00000000..340d01fb Binary files /dev/null and b/docs/res/UHD100T32.png differ diff --git a/docs/res/UHD400T_front_view.png b/docs/res/UHD400T_front_view.png new file mode 100644 index 00000000..843ffe08 Binary files /dev/null and b/docs/res/UHD400T_front_view.png differ diff --git a/docs/res/clearOwnership.PNG b/docs/res/clearOwnership.PNG new file mode 100644 index 00000000..3f0f7bbd Binary files /dev/null and b/docs/res/clearOwnership.PNG differ diff --git a/docs/res/hw-server.drawio b/docs/res/hw-server.drawio new file mode 100644 index 00000000..f8d2fd25 --- /dev/null +++ b/docs/res/hw-server.drawio @@ -0,0 +1,69 @@ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + \ No newline at end of file diff --git a/docs/res/hw-server.drawio.svg b/docs/res/hw-server.drawio.svg new file mode 100644 index 00000000..e6c78e5e --- /dev/null +++ b/docs/res/hw-server.drawio.svg @@ -0,0 +1,203 @@ + + + + + + + +
+
+
+ Test Client +
+
+
+
+ + Test Client + +
+
+ + + + +
+
+
+ gNMI client +
+
+
+
+ + gNMI client + +
+
+ + + + + +
+
+
+ + + keng-layer23-hw-server + + +
+
+
+
+ + keng-layer23-hw-serv... + +
+
+ + + + + + + + +
+
+
+ + keng-controller + +
+
+ + localhost:8443 +
+ localhost:40051 +
+
+
+
+
+ + keng-controller... + +
+
+ + + + +
+
+
+ + + + otg-gnmi-server + + + +
+
+ localhost:50051 +
+
+
+
+ + otg-gnmi-server... + +
+
+ + + + +
+
+
+ + Keysight Ixia Hardware Chassis with IxOS + +
+
+
+
+ + Keysight Ixia Hardware Cha... + +
+
+ + + + + + +
+
+
+ Keysight Elastic Network Generator +
+
+
+
+ + Keysight Elastic Network Generator + +
+
+ + + + + + + + + + +
+
+
+ Open Traffic Generator (OTG) API +
+
+
+
+ + Open Traffic Gener... + +
+
+ + + + +
+
+
+ gNMI +
+
+
+
+ + gNMI + +
+
+
+ + + + + Text is not SVG - cannot display + + + +
\ No newline at end of file diff --git a/docs/res/ixia-c-aur.drawio.svg b/docs/res/ixia-c-aur.drawio.svg index 7aa30641..b76aa5fe 100644 --- a/docs/res/ixia-c-aur.drawio.svg +++ b/docs/res/ixia-c-aur.drawio.svg @@ -1,4 +1,4 @@ - + @@ -56,9 +56,15 @@
+ + + + keng-controller + + + - ixia-c-controller

:8443 @@ -70,7 +76,7 @@
- ixia-c-controller... + keng-controller... @@ -109,7 +115,7 @@
- + Ixia-c Deployment for Bidrectional Traffic
@@ -212,7 +218,7 @@
- + Open Traffic Generator API @@ -232,9 +238,15 @@
+ + + + keng-app-usage-reporter + + + - ixia-c-app-usage-reporter

:5600 @@ -246,7 +258,7 @@
- ixia-c-app-usage-reporter... + keng-app-usage-reporter... @@ -278,7 +290,7 @@ - Viewer does not support full SVG 1.1 + Text is not SVG - cannot display diff --git a/docs/res/license-server.drawio.svg b/docs/res/license-server.drawio.svg new file mode 100644 index 00000000..eb35c7e6 --- /dev/null +++ b/docs/res/license-server.drawio.svg @@ -0,0 +1,314 @@ + + + + + + + + +
+
+
+ OTG Client +
+ + test execution + +
+
+
+
+ + OTG Client... + +
+
+ + + + +
+
+
+ + + ixia-c-controller + +
+ OTG API Endpoint +
+
+
+
+
+
+ + ixia-c-controller... + +
+
+ + + + + + +
+
+
+ + Keysight License Server Connectivity + +
+
+
+
+ + Keysight License Server Connectivity + +
+
+ + + + +
+
+
+ + + Open Traffic Generator API + + +
+
+
+
+ + Open Traffic Ge... + +
+
+ + + + +
+
+
+ License Server VM +
+ + license-srv-ip + +
+
+
+
+ + License Server VM... + +
+
+ + + + + +
+
+
+ grpc:7443 +
+
+
+
+ + grpc:7443 + +
+
+ + + + +
+
+
+ License Administrator +
+ + any ssh client + +
+
+
+
+ + License Administrator... + +
+
+ + + + + +
+
+
+ ssh:22 +
+
+
+
+ + ssh:22 + +
+
+ + + + +
+
+
+ Keysight Software Manager +
+ + ksm.software.keysight.com + +
+
+
+
+ + Keysight Software Manager... + +
+
+ + + + + +
+
+
+ https:443 +
+
+
+
+ + https:443 + +
+
+ + + + +
+
+
+ OTG Client +
+ + test execution + +
+
+
+
+ + OTG Client... + +
+
+ + + + +
+
+
+ + + ixia-c-controller + +
+ OTG API Endpoint +
+
+
+
+
+
+ + ixia-c-controller... + +
+
+ + + + + + +
+
+
+ OTG Client +
+ + test execution VM + +
+
+
+
+ + OTG Client... + +
+
+ + + + +
+
+
+ + + Elastic Network Generator Controller + +
+ + keng-controller + +
+
+
+
+
+
+ + Elastic Network Gener... + +
+
+ + +
+ + + + + Text is not SVG - cannot display + + + +
\ No newline at end of file diff --git a/docs/res/otg-keng-labels-on-white.drawio.svg b/docs/res/otg-keng-labels-on-white.drawio.svg new file mode 100644 index 00000000..06933b31 --- /dev/null +++ b/docs/res/otg-keng-labels-on-white.drawio.svg @@ -0,0 +1,4 @@ + + + +
Ixia-c
Ixia-c
UHD400T
UHD400T
IxOS Hardware
IxOS Hardwa...
Elastic Network Generator
Elast...
Test Program
Test...
OTG
OTG
Open Traffic Generator API
Open Traffic G...
Text is not SVG - cannot display
\ No newline at end of file diff --git a/docs/res/server-connections.png b/docs/res/server-connections.png new file mode 100644 index 00000000..d7403c8f Binary files /dev/null and b/docs/res/server-connections.png differ diff --git a/docs/res/system-with-UHD400T.drawio.svg b/docs/res/system-with-UHD400T.drawio.svg new file mode 100644 index 00000000..7ed7e5c7 --- /dev/null +++ b/docs/res/system-with-UHD400T.drawio.svg @@ -0,0 +1,4 @@ + + + +
DUT/SUT
DUT/SUT
UHD-400
UHD-400
Port 1-16 VLANs
Port 1-16 VLANs
Protocol Engine
Protocol Engine
Protocol Engine
Protocol Engine
Test Controller
Test Controller
Server
Server
Port 1
Port 1
Port 16
Port 16
Port 1 VLAN
Port 1 VLAN
Port 16 VLAN
Port 16 VLAN
Port 32
Port 32
Example System with UHD400T
Example System with UHD400T
Text is not SVG - cannot display
\ No newline at end of file diff --git a/docs/res/system_with_UHD400T.drawio b/docs/res/system_with_UHD400T.drawio new file mode 100644 index 00000000..d81d0bf9 --- /dev/null +++ b/docs/res/system_with_UHD400T.drawio @@ -0,0 +1,148 @@ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + diff --git a/docs/res/system_with_UHD400T.svg b/docs/res/system_with_UHD400T.svg new file mode 100644 index 00000000..8e239aac --- /dev/null +++ b/docs/res/system_with_UHD400T.svg @@ -0,0 +1,4 @@ + + + +
DUT/SUT
DUT/SUT
UHD-400
UHD-400
Port 1-16 VLANs
Port 1-16 VLANs
Protocol Engine
Protocol Engine
Protocol Engine
Protocol Engine
Test Controller
Test Controller
Server
Server
Port 1
Port 1
Port 16
Port 16
Port 1 VLAN
Port 1 VLAN
Port 16 VLAN
Port 16 VLAN
Port 32
Port 32
Text is not SVG - cannot display
\ No newline at end of file diff --git a/docs/res/testbed_connections.svg b/docs/res/testbed_connections.svg new file mode 100644 index 00000000..2268184e --- /dev/null +++ b/docs/res/testbed_connections.svg @@ -0,0 +1,4 @@ + + + +
COTS Server
(HP ProLiant DL360p)
COTS Server...
UHD-400G-T16
UHD-400G-T16
Ixia-c Control Plane
Ixia-c Cont...
3rd party Containers
3rd party C...
3rd party Containers
3rd party C...
OTG Service
OTG Service
Ports
Ports
2
2
3
3
4
4
1
1
32
32
1
1
Ports
Ports
...
...
Text is not SVG - cannot display
\ No newline at end of file diff --git a/docs/res/tests-sdk-ixia-c.drawio.svg b/docs/res/tests-sdk-ixia-c.drawio.svg index d040795e..f21d606e 100644 --- a/docs/res/tests-sdk-ixia-c.drawio.svg +++ b/docs/res/tests-sdk-ixia-c.drawio.svg @@ -38,14 +38,14 @@
- Test Script         Ixia-c Controller + Test Script         KENG controller
- Test Script         Ixia-c Controller + Test Script         KENG controller @@ -60,7 +60,7 @@
- Ixia-c Controller + KENG controller
:8443
@@ -71,7 +71,7 @@
- Ixia-c Controller... + KENG controller... diff --git a/docs/res/topo.png b/docs/res/topo.png new file mode 100644 index 00000000..e5bed7be Binary files /dev/null and b/docs/res/topo.png differ diff --git a/docs/res/uhd-connections.png b/docs/res/uhd-connections.png new file mode 100644 index 00000000..05b3b5f8 Binary files /dev/null and b/docs/res/uhd-connections.png differ diff --git a/docs/sample-scripts.md b/docs/sample-scripts.md new file mode 100644 index 00000000..70bb5901 --- /dev/null +++ b/docs/sample-scripts.md @@ -0,0 +1,830 @@ + +The following text was taken from the UHD400 topic. +## Sample `gosnappi` scripts + +Two sample `gosnappi` scripts can be found in the directory [`gosnappi/`](./gosnappi) of the following git repo: https://gitlab.it.keysight.com/p4isg/uhd-400g-docs. They are also located in the admin shell of the UHD. + +The two sample scripts provided are `uhd_b2b.go` and `uhd_b2b_bgp.go`. + +- `uhd_b2b.go` sends a fixed packet count with incrementing MAC and IP addresses. The script then collects and verifies the flow statistics. +- `uhd_b2b_bgp.go` configures 1 BGP session per port and advertises 2 routes. The script then sends a fixed packet count across those routes. The script finally collects and verifies the flow statistics. + +The scripts' topology assumes a back-to-back connection between odd- and even-numbered ports (for example, 1<-->2, 3<-->4, ..., 15<-->16). + +To build the scripts, run `./build.sh` (Go must be installed): + +```shell +# build +./build.sh + +# Run uhd_b2b +./gosnappi/uhd_b2b -host https:// + +# Run uhd_b2b_bgp +./gosnappi/uhd_b2b_bgp -host https:// +``` + +For information on gosnappi, see https://github.com/open-traffic-generator/snappi/tree/main/gosnappi. + +## Reference + +
+ +Expand this section for sample output of `uhd_b2b` test + +```shell +./gosnappi/uhd_b2b -host https://10.36.79.196 + +2022/02/28 20:17:04 Total ports is 2 +2022/02/28 20:17:04 Creating gosnappi client for gRPC server grpc-service.default.svc.cluster.local:40051 ... +2022/02/28 20:17:04 Connecting to server at https://10.36.79.196 +2022/02/28 20:17:04 Creating port p1 at location uhd://nanite-bfs-v1.nanite-bfs:7531;1 +2022/02/28 20:17:04 Creating port p2 at location uhd://nanite-bfs-v1.nanite-bfs:7531;2 +2022/02/28 20:17:04 Creating flow p1->p2-IPv4 +2022/02/28 20:17:04 Flow p1->p2-IPv4 srcMac 00:11:22:33:44:00 dstMac 00:11:22:33:44:01 +2022/02/28 20:17:04 Flow p1->p2-IPv4 srcIp 10.1.1.1 dstIp 10.1.1.2 +2022/02/28 20:17:04 Creating flow p2->p1-IPv4 +2022/02/28 20:17:04 Flow p2->p1-IPv4 srcMac 00:11:22:33:44:01 dstMac 00:11:22:33:44:00 +2022/02/28 20:17:04 Flow p2->p1-IPv4 srcIp 10.1.1.2 dstIp 10.1.1.1 +2022/02/28 20:17:04 flows: +- duration: + choice: fixed_packets + fixed_packets: + gap: 12 + packets: 5000000 + metrics: + enable: true + loss: false + timestamps: false + name: p1->p2-IPv4 + packet: + - choice: ethernet + ethernet: + dst: + choice: increment + increment: + count: 10000 + start: "00:11:22:33:44:01" + step: "00:00:00:00:01:00" + src: + choice: increment + increment: + count: 10000 + start: "00:11:22:33:44:00" + step: "00:00:00:00:01:00" + - choice: ipv4 + ipv4: + dst: + choice: increment + increment: + count: 10000 + start: 10.1.1.2 + step: 0.1.0.0 + src: + choice: increment + increment: + count: 10000 + start: 10.1.1.1 + step: 0.1.0.0 + rate: + choice: percentage + percentage: 10 + size: + choice: fixed + fixed: 64 + tx_rx: + choice: port + port: + rx_name: p2 + tx_name: p1 +- duration: + choice: fixed_packets + fixed_packets: + gap: 12 + packets: 5000000 + metrics: + enable: true + loss: false + timestamps: false + name: p2->p1-IPv4 + packet: + - choice: ethernet + ethernet: + dst: + choice: increment + increment: + count: 10000 + start: 
"00:11:22:33:44:00" + step: "00:00:00:00:01:00" + src: + choice: increment + increment: + count: 10000 + start: "00:11:22:33:44:01" + step: "00:00:00:00:01:00" + - choice: ipv4 + ipv4: + dst: + choice: increment + increment: + count: 10000 + start: 10.1.1.1 + step: 0.1.0.0 + src: + choice: increment + increment: + count: 10000 + start: 10.1.1.2 + step: 0.1.0.0 + rate: + choice: percentage + percentage: 10 + size: + choice: fixed + fixed: 64 + tx_rx: + choice: port + port: + rx_name: p1 + tx_name: p2 +layer1: +- mtu: 1500 + name: l1 + port_names: + - p1 + - p2 + promiscuous: true + speed: speed_400_gbps +ports: +- location: uhd://nanite-bfs-v1.nanite-bfs:7531;1 + name: p1 +- location: uhd://nanite-bfs-v1.nanite-bfs:7531;2 + name: p2 + +2022/02/28 20:17:04 Setting Config ... +2022/02/28 20:17:05 api: SetConfig, choice: - took 559 ms +2022/02/28 20:17:05 Setting TransmitState ... +2022/02/28 20:17:06 api: SetTransmitState, choice: start - took 1042 ms +2022/02/28 20:17:06 Waiting for condition to be true ... +2022/02/28 20:17:06 Getting Metrics ... +2022/02/28 20:17:09 api: GetMetrics, choice: flow - took 2990 ms +2022/02/28 20:17:09 api: GetFlowMetrics, choice: - took 2990 ms +2022/02/28 20:17:09 Getting Metrics ... 
+2022/02/28 20:17:09 api: GetMetrics, choice: port - took 41 ms +2022/02/28 20:17:09 api: GetPortMetrics, choice: - took 41 ms +2022/02/28 20:17:09 + +Port Metrics +----------------------------------------------------------------- +Name Frames Tx Frames Rx +p1 5000000 5000000 +p2 5000000 5000000 +----------------------------------------------------------------- + + +Flow Metrics +-------------------------------------------------- +Name Frames Rx +p1->p2-IPv4 5000000 +p2->p1-IPv4 5000000 +-------------------------------------------------- + + +2022/02/28 20:17:09 Done waiting for condition to be true +2022/02/28 20:17:09 api: WaitFor, choice: condition to be true - took 3031 ms +2022/02/28 20:17:09 Total time is 4.647671319s +2022/02/28 20:17:09 Closing gosnappi client ... +``` + +
+ +
+ +Expand this section for sample output of `uhd_b2b_bgp` test + +```shell +./gosnappi/uhd_b2b_bgp -host https://10.36.79.196 +2022/02/28 20:22:32 Total ports is 2 +2022/02/28 20:22:32 Creating gosnappi client for gRPC server grpc-service.default.svc.cluster.local:40051 ... +2022/02/28 20:22:32 Connecting to server at https://10.36.79.196 +2022/02/28 20:22:32 Creating port p1 at location uhd://nanite-bfs-v1.nanite-bfs:7531;1+r0.rustic.svc.cluster.local:50071 +2022/02/28 20:22:32 Creating port p2 at location uhd://nanite-bfs-v1.nanite-bfs:7531;2+r1.rustic.svc.cluster.local:50071 +2022/02/28 20:22:32 Creating flow p1->p2-IPv4 +2022/02/28 20:22:32 Flow p1->p2-IPv4 srcMac 00:11:22:33:44:00 dstMac 00:11:22:33:44:01 +2022/02/28 20:22:32 Flow p1->p2-IPv4 srcIp 100.1.1.1 dstIp 100.1.1.2 +2022/02/28 20:22:32 Creating flow p2->p1-IPv4 +2022/02/28 20:22:32 Flow p2->p1-IPv4 srcMac 00:11:22:33:44:01 dstMac 00:11:22:33:44:00 +2022/02/28 20:22:32 Flow p2->p1-IPv4 srcIp 100.1.1.2 dstIp 100.1.1.1 +2022/02/28 20:22:32 devices: +- bgp: + ipv4_interfaces: + - ipv4_name: d1ipv4 + peers: + - as_number: 1111 + as_number_width: four + as_type: ebgp + name: BGPv4 Peer p1 + peer_address: 100.1.1.2 + v4_routes: + - addresses: + - address: 11.1.11.0 + count: 2 + prefix: 24 + step: 2 + name: p1d1peer1rrv4 + next_hop_address_type: ipv4 + next_hop_ipv4_address: 0.0.0.0 + next_hop_ipv6_address: ::0 + next_hop_mode: local_ip + router_id: 100.1.1.1 + ethernets: + - ipv4_addresses: + - address: 100.1.1.1 + gateway: 100.1.1.2 + name: d1ipv4 + prefix: 24 + mac: "00:11:22:33:44:00" + mtu: 1500 + name: d1Eth + port_name: p1 + name: d1 +- bgp: + ipv4_interfaces: + - ipv4_name: d2ipv4 + peers: + - as_number: 2222 + as_number_width: four + as_type: ebgp + name: BGPv4 Peer p2 + peer_address: 100.1.1.1 + v4_routes: + - addresses: + - address: 12.1.12.0 + count: 2 + prefix: 24 + step: 2 + name: p2d2peer1rrv4 + next_hop_address_type: ipv4 + next_hop_ipv4_address: 0.0.0.0 + next_hop_ipv6_address: ::0 + 
next_hop_mode: local_ip + router_id: 100.1.1.2 + ethernets: + - ipv4_addresses: + - address: 100.1.1.2 + gateway: 100.1.1.1 + name: d2ipv4 + prefix: 24 + mac: "00:11:22:33:44:01" + mtu: 1500 + name: d2Eth + port_name: p2 + name: d2 +flows: +- duration: + choice: fixed_packets + fixed_packets: + gap: 12 + packets: 5000000 + metrics: + enable: true + loss: false + timestamps: false + name: p1->p2-IPv4 + packet: + - choice: ethernet + ethernet: + dst: + choice: value + value: "00:11:22:33:44:01" + src: + choice: value + value: "00:11:22:33:44:00" + - choice: ipv4 + ipv4: + dst: + choice: value + value: 100.1.1.2 + src: + choice: value + value: 100.1.1.1 + rate: + choice: percentage + percentage: 10 + size: + choice: fixed + fixed: 64 + tx_rx: + choice: port + port: + rx_name: p2 + tx_name: p1 +- duration: + choice: fixed_packets + fixed_packets: + gap: 12 + packets: 5000000 + metrics: + enable: true + loss: false + timestamps: false + name: p2->p1-IPv4 + packet: + - choice: ethernet + ethernet: + dst: + choice: value + value: "00:11:22:33:44:00" + src: + choice: value + value: "00:11:22:33:44:01" + - choice: ipv4 + ipv4: + dst: + choice: value + value: 100.1.1.1 + src: + choice: value + value: 100.1.1.2 + rate: + choice: percentage + percentage: 10 + size: + choice: fixed + fixed: 64 + tx_rx: + choice: port + port: + rx_name: p1 + tx_name: p2 +layer1: +- mtu: 1500 + name: l1 + port_names: + - p1 + - p2 + promiscuous: true + speed: speed_400_gbps +ports: +- location: uhd://nanite-bfs-v1.nanite-bfs:7531;1+r0.rustic.svc.cluster.local:50071 + name: p1 +- location: uhd://nanite-bfs-v1.nanite-bfs:7531;2+r1.rustic.svc.cluster.local:50071 + name: p2 + +2022/02/28 20:22:32 Setting Config ... +2022/02/28 20:22:33 api: SetConfig, choice: - took 710 ms +2022/02/28 20:22:33 Setting SetProtocolState ... +2022/02/28 20:22:33 api: SetProtocolState, choice: start - took 835 ms +2022/02/28 20:22:33 Waiting for condition to be true ... +2022/02/28 20:22:33 Getting Metrics ... 
+2022/02/28 20:22:34 api: GetMetrics, choice: bgpv4 - took 68 ms +2022/02/28 20:22:34 api: GetBgpv4Metrics, choice: - took 68 ms +2022/02/28 20:22:34 + +BGPv4 Metrics +----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- +Name BGPv4 Peer p1 BGPv4 Peer p2 +Session State down down +Session Flaps 0 0 +Routes Advertised 0 0 +Routes Received 0 0 +Route Withdraws Tx 0 0 +Route Withdraws Rx 0 0 +Keepalives Tx 0 0 +Keepalives Rx 0 0 +----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- + + +2022/02/28 20:22:34 Getting Metrics ... +2022/02/28 20:22:34 api: GetMetrics, choice: bgpv4 - took 40 ms +2022/02/28 20:22:34 api: GetBgpv4Metrics, choice: - took 40 ms +2022/02/28 20:22:34 + +BGPv4 Metrics +----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- +Name BGPv4 Peer p1 BGPv4 Peer p2 +Session State down down +Session Flaps 0 0 +Routes Advertised 0 0 +Routes Received 0 0 +Route Withdraws Tx 0 0 +Route Withdraws Rx 0 0 +Keepalives Tx 0 0 +Keepalives Rx 0 0 +----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- + + +2022/02/28 20:22:35 Getting Metrics ... 
+2022/02/28 20:22:35 api: GetMetrics, choice: bgpv4 - took 40 ms +2022/02/28 20:22:35 api: GetBgpv4Metrics, choice: - took 40 ms +2022/02/28 20:22:35 + +BGPv4 Metrics +----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- +Name BGPv4 Peer p1 BGPv4 Peer p2 +Session State down down +Session Flaps 0 0 +Routes Advertised 0 0 +Routes Received 0 0 +Route Withdraws Tx 0 0 +Route Withdraws Rx 0 0 +Keepalives Tx 0 0 +Keepalives Rx 0 0 +----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- + + +2022/02/28 20:22:35 Getting Metrics ... +2022/02/28 20:22:35 api: GetMetrics, choice: bgpv4 - took 43 ms +2022/02/28 20:22:35 api: GetBgpv4Metrics, choice: - took 43 ms +2022/02/28 20:22:35 + +BGPv4 Metrics +----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- +Name BGPv4 Peer p1 BGPv4 Peer p2 +Session State down down +Session Flaps 0 0 +Routes Advertised 0 0 +Routes Received 0 0 +Route Withdraws Tx 0 0 +Route Withdraws Rx 0 0 +Keepalives Tx 0 0 +Keepalives Rx 0 0 +----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- + + +2022/02/28 20:22:36 Getting Metrics ... 
+2022/02/28 20:22:36 api: GetMetrics, choice: bgpv4 - took 38 ms +2022/02/28 20:22:36 api: GetBgpv4Metrics, choice: - took 38 ms +2022/02/28 20:22:36 + +BGPv4 Metrics +----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- +Name BGPv4 Peer p1 BGPv4 Peer p2 +Session State up up +Session Flaps 0 0 +Routes Advertised 2 2 +Routes Received 2 2 +Route Withdraws Tx 0 0 +Route Withdraws Rx 0 0 +Keepalives Tx 2 2 +Keepalives Rx 2 2 +----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- + + +2022/02/28 20:22:36 Done waiting for condition to be true +2022/02/28 20:22:36 api: WaitFor, choice: condition to be true - took 2235 ms +2022/02/28 20:22:36 Setting TransmitState ... +2022/02/28 20:22:37 api: SetTransmitState, choice: start - took 953 ms +2022/02/28 20:22:37 Waiting for condition to be true ... +2022/02/28 20:22:37 Getting Metrics ... +2022/02/28 20:22:39 api: GetMetrics, choice: flow - took 2646 ms +2022/02/28 20:22:39 api: GetFlowMetrics, choice: - took 2646 ms +2022/02/28 20:22:39 Getting Metrics ... 
+2022/02/28 20:22:39 api: GetMetrics, choice: port - took 66 ms +2022/02/28 20:22:39 api: GetPortMetrics, choice: - took 66 ms +2022/02/28 20:22:39 + +Port Metrics +----------------------------------------------------------------- +Name Frames Tx Frames Rx +p1 5000000 5000000 +p2 5000000 5000000 +----------------------------------------------------------------- + + +Flow Metrics +-------------------------------------------------- +Name Frames Rx +p1->p2-IPv4 5000000 +p2->p1-IPv4 5000000 +-------------------------------------------------- + + +2022/02/28 20:22:39 Done waiting for condition to be true +2022/02/28 20:22:39 api: WaitFor, choice: condition to be true - took 2713 ms +2022/02/28 20:22:39 Total time is 7.46707886s +2022/02/28 20:22:39 Setting SetProtocolState ... +2022/02/28 20:22:39 api: SetProtocolState, choice: stop - took 47 ms +2022/02/28 20:22:39 Closing gosnappi client ... +``` + +
+ +
+ +Expand this section for sample output of `test_iperf` test + +```shell +This script will, +1. Load kubeconfig to access UHD cluster +2. Deploy netshoot containers to run as custom service behind UHD Port 1 and 2 +3. Run iperf in those containers and use UHD ports for interface +Press any key to continue... +++ which kubectl ++ '[' '!' -f /usr/local/bin/kubectl ']' ++ export KUBECONFIG=/tmp/uhd400gconfig ++ KUBECONFIG=/tmp/uhd400gconfig ++ kubectl config set-cluster uhd400g --server=https://10.36.79.196:6443 --insecure-skip-tls-verify +Cluster "uhd400g" set. ++ kubectl config set-credentials uhd400g-user --token=eyJhbGciOiJSUzI1NiIsImtpZCI6Ik9nNGFBZkVoU21hcjZuSUY4cEtiTjgxVjJqcm80OWxIU25fUVZ0anpwazQifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJydXN0aWMiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlY3JldC5uYW1lIjoici10b2tlbi04djRyNSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50Lm5hbWUiOiJyIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQudWlkIjoiMjUyNWViYzYtOTBlMi00NWM2LWJhNzgtYTM1YmEwNjZkZmZjIiwic3ViIjoic3lzdGVtOnNlcnZpY2VhY2NvdW50OnJ1c3RpYzpyIn0.Z3_U7c2tBWuWCdd8Wns98xZMRprJ0DO91XVlVVRgA5jS-Rcb8jVUej5pOXmvVc8FFj3ZOkggN2rdWDpNKSMDSLRuKeP47B76A0if1sUeci_sUve9ZcDuteS-t60kFOyBZG8YHPDDCArPaQedPoMpB96ekbmhJ5sprxwHKdYqT5Q_AxkoYd_8MWPESXjyxdyL-ogAtLP-KDT82_xxSW_ZMyu1CvjaqIQzNKivPk8BG72ByKjbSFBMV9ZYpFaumzOZUWZcuy_kfJ_k6TMyMCKg9FwUvSYMy39tRIY5rC3h-MTZCBSvlWpYCrlklHHsnR0pdMvtQbZMhXXO_7oMdYe9Eg +User "uhd400g-user" set. ++ kubectl config set-context uhd400g --user=uhd400g-user --cluster=uhd400g --namespace=rustic +Context "uhd400g" modified. ++ kubectl config use-context uhd400g +Switched to context "uhd400g". ++ ./uhdIfMgr -custom -image nicolaka/netshoot:v0.1 -cmd '["/bin/sh"]' -args '["-cx", "sleep inf"]' -port 2 -host 10.36.79.196 +INFO[0000] Trying to connect to gRPC server at 10.36.79.196:443 +INFO[0000] OK! 
++ ./uhdIfMgr -custom -image nicolaka/netshoot:v0.1 -cmd '["/bin/sh"]' -args '["-cx", "sleep inf"]' -port 1 -host 10.36.79.196 +INFO[0000] Trying to connect to gRPC server at 10.36.79.196:443 +INFO[0000] OK! ++ sleep 10 ++ kubectl wait --for=condition=available deploy -l cpport.keysight.com=1.0 --timeout=100s +deployment.apps/c0 condition met ++ kubectl wait --for=condition=available deploy -l cpport.keysight.com=2.0 --timeout=100s +deployment.apps/c1 condition met ++ kubectl get pods -l cpport.keysight.com=1.0 +NAME READY STATUS RESTARTS AGE +c0-6fc56dbbbd-nvlsd 1/1 Running 0 11s ++ kubectl get pods -l cpport.keysight.com=2.0 +NAME READY STATUS RESTARTS AGE +c1-dd9b4b99f-qlht2 1/1 Running 0 12s +++ get_pod 1 +++ kubectl get pods -l cpport.keysight.com=1.0 -o 'jsonpath={.items[].metadata.name}' ++ kubectl exec c0-6fc56dbbbd-nvlsd -- /bin/bash -cx 'ip link set eth1 up \ + && ip ad flush eth1 \ + && ip ad ad 5.6.7.8/24 dev eth1 \ + && kill `pidof iperf` || true \ + && iperf -s &' ++ ip link set eth1 up ++ ip ad flush eth1 ++ ip ad ad 5.6.7.8/24 dev eth1 +++ pidof iperf ++ kill +kill: usage: kill [-s sigspec | -n signum | -sigspec] pid | jobspec ... 
or kill -l [sigspec] ++ true ++ iperf -s +------------------------------------------------------------ +Server listening on TCP port 5001 +TCP window size: 128 KByte (default) +------------------------------------------------------------ +++ get_pod 2 +++ kubectl get pods -l cpport.keysight.com=2.0 -o 'jsonpath={.items[].metadata.name}' ++ kubectl exec c1-dd9b4b99f-qlht2 -- /bin/bash -cx 'ip link set eth1 up \ + && ip ad flush eth1 \ + && ip ad ad 5.6.7.9/24 dev eth1 \ + && iperf -c 5.6.7.8 -i1 -t30' ++ ip link set eth1 up ++ ip ad flush eth1 ++ ip ad ad 5.6.7.9/24 dev eth1 ++ iperf -c 5.6.7.8 -i1 -t30 +------------------------------------------------------------ +Client connecting to 5.6.7.8, TCP port 5001 +TCP window size: 85.0 KByte (default) +------------------------------------------------------------ +[ 1] local 5.6.7.9 port 38046 connected with 5.6.7.8 port 5001 +[ ID] Interval Transfer Bandwidth +[ 1] 0.00-1.00 sec 161 MBytes 1.35 Gbits/sec +[ 1] 1.00-2.00 sec 214 MBytes 1.80 Gbits/sec +[ 1] 2.00-3.00 sec 215 MBytes 1.80 Gbits/sec +[ 1] 3.00-4.00 sec 193 MBytes 1.62 Gbits/sec +[ 1] 4.00-5.00 sec 206 MBytes 1.72 Gbits/sec +[ 1] 5.00-6.00 sec 201 MBytes 1.69 Gbits/sec +[ 1] 6.00-7.00 sec 213 MBytes 1.78 Gbits/sec +[ 1] 7.00-8.00 sec 220 MBytes 1.84 Gbits/sec +[ 1] 8.00-9.00 sec 204 MBytes 1.71 Gbits/sec +[ 1] 9.00-10.00 sec 210 MBytes 1.76 Gbits/sec +[ 1] 10.00-11.00 sec 211 MBytes 1.77 Gbits/sec +[ 1] 11.00-12.00 sec 201 MBytes 1.69 Gbits/sec +[ 1] 12.00-13.00 sec 220 MBytes 1.85 Gbits/sec +[ 1] 13.00-14.00 sec 197 MBytes 1.65 Gbits/sec +[ 1] 14.00-15.00 sec 200 MBytes 1.68 Gbits/sec +[ 1] 15.00-16.00 sec 213 MBytes 1.79 Gbits/sec +[ 1] 16.00-17.00 sec 228 MBytes 1.92 Gbits/sec +[ 1] 17.00-18.00 sec 223 MBytes 1.87 Gbits/sec +[ 1] 18.00-19.00 sec 222 MBytes 1.86 Gbits/sec +[ 1] 19.00-20.00 sec 197 MBytes 1.65 Gbits/sec +[ 1] 20.00-21.00 sec 215 MBytes 1.80 Gbits/sec +[ 1] 21.00-22.00 sec 202 MBytes 1.69 Gbits/sec +[ 1] 22.00-23.00 sec 220 MBytes 1.84 
Gbits/sec +[ 1] 23.00-24.00 sec 199 MBytes 1.67 Gbits/sec +[ 1] 24.00-25.00 sec 209 MBytes 1.75 Gbits/sec +[ 1] 25.00-26.00 sec 211 MBytes 1.77 Gbits/sec +[ 1] 26.00-27.00 sec 195 MBytes 1.64 Gbits/sec +[ 1] 27.00-28.00 sec 205 MBytes 1.72 Gbits/sec +[ 1] 28.00-29.00 sec 210 MBytes 1.76 Gbits/sec +[ 1] 29.00-30.00 sec 194 MBytes 1.63 Gbits/sec +[ 1] 0.00-30.02 sec 6.06 GBytes 1.74 Gbits/sec ++ ./uhdIfMgr -host 10.36.79.196 -port 1 +INFO[0000] Trying to connect to gRPC server at 10.36.79.196:443 +INFO[0000] OK! ++ ./uhdIfMgr -host 10.36.79.196 -port 2 +INFO[0000] Trying to connect to gRPC server at 10.36.79.196:443 +INFO[0000] OK! ++ sleep 10 ++ kubectl wait --for=condition=available deploy -l cpport.keysight.com=1.0 --timeout=100s +deployment.apps/r0 condition met ++ kubectl wait --for=condition=available deploy -l cpport.keysight.com=2.0 --timeout=100s +deployment.apps/r1 condition met +``` + +
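As a quick sanity check on the iperf summary above (not part of the original test output): iperf reports transfer in binary units (1 GByte = 2^30 bytes) but bandwidth in decimal units (1 Gbit = 10^9 bits), so the final line — 6.06 GBytes in 30.02 sec at 1.74 Gbits/sec — can be reproduced like this:

```python
# iperf transfer is in binary GBytes (2**30 bytes); bandwidth is in decimal
# Gbits/sec (10**9 bits). Recompute the summary line from the report above.
transferred_bits = 6.06 * 2**30 * 8   # "6.06 GBytes" transferred
duration_s = 30.02                    # "0.00-30.02 sec" interval
gbps = transferred_bits / duration_s / 1e9
print(f"{gbps:.2f} Gbits/sec")        # ~1.73; iperf's 1.74 reflects rounding of the 6.06 figure
```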
+
+
+The following test was taken from the IxOS HW topic:
+
+**Sample Test**
+
+Before attempting the sample test, the deployment must be bootstrapped and the KENG services must be running, as described in the deployment section above.
+
+The sample test uses two back-to-back ports on a chassis and is named `otg-flows.py` in the example shown below.
+
+1. Use the following commands to set up a Python `virtualenv`:
+
+   `python3 -m venv venv`
+
+   `source venv/bin/activate`
+
+   `pip install -r requirements.txt`
+
+2. To run flows using the `snappi` script and report port metrics, use:
+
+   `otg-flows.py -m port`
+
+3. To run flows using the `snappi` script and report flow metrics, use:
+
+   `otg-flows.py -m flow`
+
+```python
+# Sample Test "otg-flows.py"
+#!/usr/bin/env python3
+# Copyright (c) 2023 Open Traffic Generator
+# Permission is hereby granted, free of charge, to any person obtaining a copy
+# of this software and associated documentation files (the "Software"), to deal
+# in the Software without restriction, including without limitation the rights
+# to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
+# copies of the Software, and to permit persons to whom the Software is
+# furnished to do so, subject to the following conditions:
+# The above copyright notice and this permission notice shall be included in
+# all copies or substantial portions of the Software.
+# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
+# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
+# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
+# THE SOFTWARE.
+ +import sys, os +import argparse +import snappi + +def port_metrics_ok(api, req, packets): + res = api.get_metrics(req) + print(res) + if packets == sum([m.frames_tx for m in res.port_metrics]) and packets == sum([m.frames_rx for m in res.port_metrics]): + return True + +def flow_metrics_ok(api, req, packets): + res = api.get_metrics(req) + print(res) + if packets == sum([m.frames_tx for m in res.flow_metrics]) and packets == sum([m.frames_rx for m in res.flow_metrics]): + return True + +def wait_for(func, timeout=15, interval=0.2): + """ + Keeps calling the `func` until it returns true or `timeout` occurs + every `interval` seconds. + """ + import time + + start = time.time() + + while time.time() - start <= timeout: + if func(): + return True + time.sleep(interval) + + print("Timeout occurred !") + return False + +def arg_metric_check(s): + allowed_values = ['port', 'flow'] + if s in allowed_values: + return s + raise argparse.ArgumentTypeError(f"metric has to be one of {allowed_values}") + +def parse_args(): + # Argument parser + parser = argparse.ArgumentParser(description='Run OTG traffic flows') + + # Add arguments to the parser + parser.add_argument('-m', '--metric', required=False, help='metrics to monitor: port | flow', + default='port', + type=arg_metric_check) + # Parse the arguments + return parser.parse_args() + +def main(): + """ + Main function + """ + # Parameters + args = parse_args() + + API = "https://localhost:8443" + #Replace with values matching your setup/equipment. 
For example, if the IxOS management IP is 10.10.10.10 and you need to use ports 14 and 15 in slot 2:
+    P1_LOCATION = "10.10.10.10;2;14"
+    P2_LOCATION = "10.10.10.10;2;15"
+
+    api = snappi.api(location=API, verify=False)
+    cfg = api.config()
+
+    # config has an attribute called `ports` which holds an iterator of type
+    # `snappi.PortIter`, where each item is of type `snappi.Port` (p1 and p2)
+    p1, p2 = cfg.ports.port(name="p1", location=P1_LOCATION).port(name="p2", location=P2_LOCATION)
+
+    # config has an attribute called `flows` which holds an iterator of type
+    # `snappi.FlowIter`, where each item is of type `snappi.Flow` (f1, f2)
+    f1, f2 = cfg.flows.flow(name="flow p1->p2").flow(name="flow p2->p1")
+
+    # and assign source and destination ports for each
+    f1.tx_rx.port.tx_name, f1.tx_rx.port.rx_name = p1.name, p2.name
+    f2.tx_rx.port.tx_name, f2.tx_rx.port.rx_name = p2.name, p1.name
+
+    # configure packet size, rate and duration for both flows
+    f1.size.fixed, f2.size.fixed = 128, 256
+    for f in cfg.flows:
+        # send 1000 packets and stop
+        f.duration.fixed_packets.packets = 1000
+        # send 1000 packets per second
+        f.rate.pps = 1000
+        # allow fetching flow metrics
+        f.metrics.enable = True
+
+    # configure packet with Ethernet, IPv4 and UDP headers for both flows
+    eth1, ip1, udp1 = f1.packet.ethernet().ipv4().udp()
+    eth2, ip2, udp2 = f2.packet.ethernet().ipv4().udp()
+
+    # set source and destination MAC addresses
+    eth1.src.value, eth1.dst.value = "00:AA:00:00:04:00", "00:AA:00:00:00:AA"
+    eth2.src.value, eth2.dst.value = "00:AA:00:00:00:AA", "00:AA:00:00:04:00"
+
+    # set source and destination IPv4 addresses
+    ip1.src.value, ip1.dst.value = "10.0.0.1", "10.0.0.2"
+    ip2.src.value, ip2.dst.value = "10.0.0.2", "10.0.0.1"
+
+    # set incrementing port numbers as source UDP ports
+    udp1.src_port.increment.start = 5000
+    udp1.src_port.increment.step = 2
+    udp1.src_port.increment.count = 10
+
+    udp2.src_port.increment.start = 6000
+    udp2.src_port.increment.step =
4 + udp2.src_port.increment.count = 10 + + # assign list of port numbers as destination UDP ports + udp1.dst_port.values = [4000, 4044, 4060, 4074] + udp2.dst_port.values = [8000, 8044, 8060, 8074, 8082, 8084] + + # print resulting otg configuration + print(cfg) + + # push configuration to controller + api.set_config(cfg) + + # start transmitting configured flows + ts = api.control_state() + ts.traffic.flow_transmit.state = snappi.StateTrafficFlowTransmit.START + api.set_control_state(ts) + + # Check if the file argument is provided + if args.metric == 'port': + # create a port metrics request and filter based on port names + req = api.metrics_request() + req.port.port_names = [p.name for p in cfg.ports] + # include only sent and received packet counts + req.port.column_names = [req.port.FRAMES_TX, req.port.FRAMES_RX] + + # fetch port metrics + res = api.get_metrics(req) + + # wait for port metrics to be as expected + expected = sum([f.duration.fixed_packets.packets for f in cfg.flows]) + assert wait_for(lambda: port_metrics_ok(api, req, expected)), "Metrics validation failed!" + + elif args.metric == 'flow': + # create a flow metrics request and filter based on port names + req = api.metrics_request() + req.flow.flow_names = [f.name for f in cfg.flows] + + # fetch metrics + res = api.get_metrics(req) + + # wait for flow metrics to be as expected + expected = sum([f.duration.fixed_packets.packets for f in cfg.flows]) + assert wait_for(lambda: flow_metrics_ok(api, req, expected)), "Metrics validation failed!" 
+ +if __name__ == '__main__': + sys.exit(main()) +``` \ No newline at end of file diff --git a/docs/stylesheets/extra.css b/docs/stylesheets/extra.css new file mode 100644 index 00000000..8a300fed --- /dev/null +++ b/docs/stylesheets/extra.css @@ -0,0 +1,124 @@ +[data-md-color-scheme="ks-light"] { + --ks-color-black: #000000; + --ks-color-dark-red: #871518; + --ks-color-red: #E90029; + --ks-color-dark-gray: #373A36; + --ks-color-medium-gray: #97999B; + --ks-color-gray: #D9D9D6; + --ks-color-light-gray: #EBEBEB; + --ks-color-white: #FFFFFF; + + --ks-color-dark-blue: #071D49; + --ks-color-blue: #426DA9; + --ks-color-teal: #63B1BC; + + --gh-color-code-bg-color: rgb(246,248,250); + --gh-color-code-fg-color: rgb(36,41,47); + --gh-color-code-comment-color: rgb(110,119,129); + --gh-color-code-string-color: rgb(10,48,105); + --gh-color-code-keyword-color: rgb(207,34,46); + --gh-color-code-pretty-color: rgb(130,80,223); + + --md-primary-fg-color: var(--ks-color-black); + --md-primary-fg-color--light: var(--ks-color-light-gray); + --md-primary-fg-color--dark: var(--ks-color-dark-gray); + --md-primary-bg-color: var(--ks-color-white); + + --md-default-bg-color: var(--ks-color-white); + --md-default-fg-color--light: var(--ks-color-dark-gray); + --md-default-fg-color: var(--ks-color-black); + --md-default-fg-color--dark: var(--ks-color-black); + + --md-typeset-color: var(--ks-color-dark-gray); + --md-typeset-a-color: var(--ks-color-red); + + --md-code-bg-color: var(--gh-color-code-bg-color); + --md-code-fg-color: var(--gh-color-code-fg-color); + --md-code-hl-comment-color: var(--gh-color-code-comment-color); + --md-code-hl-variable-color: var(--gh-color-code-pretty-color); + --md-code-hl-name-color: var(--gh-color-code-pretty-color); + --md-code-hl-number-color: var(--gh-color-code-string-color); + --md-code-hl-string-color: var(--gh-color-code-string-color); + --md-code-hl-special-color: var(--gh-color-code-fg-color); + --md-code-hl-operator-color: 
var(--gh-color-code-string-color); + --md-code-hl-punctuation-color: var(--gh-color-code-fg-color); + --md-code-hl-keyword-color: var(--gh-color-code-keyword-color); + --md-code-hl-function-color: var(--gh-color-code-pretty-color); + --md-code-hl-constant-color: var(--gh-color-code-pretty-color); + + --md-admonition-bg-color: var(--gh-color-code-bg-color); + --md-admonition-fg-color: var(--md-default-fg-color--light); +} +[data-md-color-scheme="ks-dark"] { + --ks-color-black: #000000; + --ks-color-dark-red: #871518; + --ks-color-red: #E90029; + --ks-color-dark-gray: #373A36; + --ks-color-medium-gray: #97999B; + --ks-color-gray: #D9D9D6; + --ks-color-light-gray: #EBEBEB; + --ks-color-white: #FFFFFF; + + --ks-color-dark-blue: #071D49; + --ks-color-blue: #426DA9; + --ks-color-teal: #63B1BC; + + --gh-color-code-bg-color: rgb(22,27,24); + --gh-color-code-fg-color: rgb(201,209,207); + --gh-color-code-comment-color: rgb(139,148,158); + --gh-color-code-string-color: rgb(165,214,255); + --gh-color-code-keyword-color: rgb(255,123,114); + --gh-color-code-pretty-color: rgb(210,168,255); + + --md-primary-fg-color: var(--ks-color-black); + --md-primary-fg-color--dark: var(--ks-color-dark-gray); + --md-primary-fg-color--light: var(--ks-color-medium-gray); + --md-primary-bg-color: var(--ks-color-white); + --md-primary-bg-color--light: var(--ks-color-light-gray); + + --md-default-bg-color--light: var(--ks-color-white); + --md-default-bg-color--lighter: var(--ks-color-white); + --md-default-bg-color--lightest: var(--ks-color-white); + --md-default-fg-color: var(--ks-color-white); + --md-default-fg-color--lightest: var(--ks-color-blue); + --md-default-fg-color--light: var(--ks-color-light-gray); + --md-default-fg-color--lighter: var(--ks-color-light-gray); + --md-default-fg-color--dark: var(--ks-color-medium-gray); + --md-default-bg-color: var(--ks-color-black); + + --md-typeset-color: var(--ks-color-gray); + --md-typeset-a-color: var(--ks-color-red); + + --md-code-bg-color: 
var(--gh-color-code-bg-color); + --md-code-fg-color: var(--gh-color-code-fg-color); + --md-code-hl-comment-color: var(--gh-color-code-comment-color); + --md-code-hl-variable-color: var(--gh-color-code-pretty-color); + --md-code-hl-name-color: var(--gh-color-code-pretty-color); + --md-code-hl-number-color: var(--gh-color-code-string-color); + --md-code-hl-string-color: var(--gh-color-code-string-color); + --md-code-hl-special-color: var(--gh-color-code-fg-color); + --md-code-hl-operator-color: var(--gh-color-code-string-color); + --md-code-hl-punctuation-color: var(--gh-color-code-fg-color); + --md-code-hl-keyword-color: var(--gh-color-code-keyword-color); + --md-code-hl-function-color: var(--gh-color-code-pretty-color); + --md-code-hl-constant-color: var(--gh-color-code-pretty-color); + + --md-admonition-bg-color: var(--md-default-bg-color); + --md-admonition-fg-color: var(--md-default-fg-color--light); + + --md-accent-fg-color: var(--ks-color-light-gray); + --md-accent-fg-color--transparent: var(--ks-color-dark-gray); +} + +.md-grid { + max-width: 1440px; +} + +.container { + display: flex; +} + +.column { + flex: 1; + padding: 20px; +} diff --git a/docs/tests-chassis-app.md b/docs/tests-chassis-app.md new file mode 100644 index 00000000..fe2fa944 --- /dev/null +++ b/docs/tests-chassis-app.md @@ -0,0 +1,154 @@ +# Ixia Chassis/Appliances + This section describes how to use KENG with Keysight's Ixia hardware chassis. + +**Prerequisites** + +To run KENG tests with Ixia hardware, the following pre-requisites must be satisfied: + +- You must have access to Keysight Elastic Network Generator (KENG) images and a valid KENG license. +- For information on how to deploy and activate a KENG license, see the Licensing section of the User Guide. 
+- The test hardware must be Keysight Ixia Novus or AresOne [Network Test Hardware](https://www.keysight.com/us/en/products/network-test/network-test-hardware.html) with [IxOS](https://support.ixiacom.com/ixos-software-downloads-documentation) 9.20 Patch 4 or higher.
+**NOTE:** Currently, only Linux-based IxOS platforms are supported with KENG.
+- There must be physical link connectivity between the test ports on the Keysight Ixia chassis and the devices under test (DUTs).
+- You must have a Linux host or virtual machine (VM) with sudo permissions and Docker support.
+
+  Below is an example of deploying an Ubuntu VM named `otg` using [multipass](https://multipass.run/). You can deploy using the means that you are most familiar with.
+
+  `multipass launch 22.04 -n otg -c4 -m8G -d32G`
+
+  `multipass shell otg`
+
+- [Docker](https://docs.docker.com/engine/install/ubuntu/). Depending on your Linux distribution, follow the steps outlined at one of the following URLs:
+  - [Ubuntu](https://docs.docker.com/engine/install/ubuntu/)
+  - [Debian](https://docs.docker.com/engine/install/debian/)
+  - [CentOS](https://docs.docker.com/engine/install/centos/)
+
+  After Docker is installed, add the current user to the `docker` group:
+
+  `sudo usermod -aG docker $USER`
+
+- Python 3 (version 3.9 or higher), pip, and virtualenv
+
+  Use the following command to install Python, pip, and virtualenv:
+
+  `sudo apt install python3 python3-pip python3.10-venv -y`
+
+- [Go](https://go.dev/dl/) version 1.19 or later, if gRPC or gNMI API access is needed.
+
+  Use the following command to install Go:
+
+  `sudo snap install go --channel=1.19/stable --classic`
+
+- `git` and `envsubst` commands (typically installed by default)
+
+  Use the following command to install `git` and `envsubst` if they are not already installed:
+
+  `sudo apt install git gettext-base -y`
+
+**Deployment Layout**
+
+The image below shows a complete topology of a KENG test environment.
+
+To run tests with KENG, the tests must be written using the Open Traffic Generator (OTG) API.
+
+Telemetry is also supported using gNMI APIs.
+
+If KENG is deployed successfully, the services shown in the block labeled 'Keysight Elastic Network Generator' will be running.
+
+KENG services interact with the Keysight Ixia hardware chassis to configure protocols and data traffic.
+
+![ ](res/hw-server.drawio.svg)
+
+**Deploying KENG**
+
+The Docker Compose tool provides a convenient way to deploy KENG services.
+
+Tests cannot be run until the KENG services are deployed and running.
+
+The following procedure shows an example of how to deploy using Docker Compose.
+
+
+1. Copy the contents shown below into a `compose.yaml` file.
+
+
+
+```yaml
+services:
+  keng-controller:
+    image: ghcr.io/open-traffic-generator/keng-controller:0.1.0-53
+    restart: always
+    depends_on:
+      keng-layer23-hw-server:
+        condition: service_started
+    command:
+      - "--accept-eula"
+      - "--debug"
+      - "--keng-layer23-hw-server"
+      - "keng-layer23-hw-server:5001"
+    ports:
+      - "40051:40051"
+    logging:
+      driver: "local"
+      options:
+        max-size: "100m"
+        max-file: "10"
+        mode: "non-blocking"
+  keng-layer23-hw-server:
+    image: ghcr.io/open-traffic-generator/keng-layer23-hw-server:0.13.0-6
+    restart: always
+    command:
+      - "dotnet"
+      - "otg-ixhw.dll"
+      - "--trace"
+      - "--log-level"
+      - "trace"
+    logging:
+      driver: "local"
+      options:
+        max-size: "100m"
+        max-file: "10"
+        mode: "non-blocking"
+  otg-gnmi-server:
+    image: ghcr.io/open-traffic-generator/otg-gnmi-server:1.13.0
+    restart: always
+    depends_on:
+      keng-controller:
+        condition: service_started
+    command:
+      - "-http-server"
+      - "https://keng-controller:8443"
+      - "--debug"
+    ports:
+      - "50051:50051"
+    logging:
+      driver: "local"
+      options:
+        max-size: "100m"
+        max-file: "10"
+        mode: "non-blocking"
+```
+
+2. Start the services with Docker Compose:
+
+   `docker compose up -d`
+
+
+3.
Use the `docker ps` command to verify that the KENG services are running:
+
+   `docker ps`
+
+The list of containers should include:
+- `keng-controller`
+- `keng-layer23-hw-server`
+- `otg-gnmi-server` (optional; needed only if gNMI access is used)
+
+When the `keng-controller` and `keng-layer23-hw-server` services are running, the deployment is ready to run a test.
+
+**Test port references**
+
+KENG uses the `/config.ports.locations` parameter to determine the test ports involved in the test.
+
+The `/config.ports.locations` parameter must be set to reference a test port.
+
+This parameter is specified in the `chassis ip;card;port` format.
+
diff --git a/docs/tests-ixia-c.md b/docs/tests-ixia-c.md
new file mode 100644
index 00000000..3ce82c44
--- /dev/null
+++ b/docs/tests-ixia-c.md
@@ -0,0 +1,3 @@
+# Ixia-c tests
+
+How to run tests with Ixia-c
diff --git a/docs/tests-uhd400.md b/docs/tests-uhd400.md
new file mode 100644
index 00000000..c44da8cc
--- /dev/null
+++ b/docs/tests-uhd400.md
@@ -0,0 +1,45 @@
+# Introduction
+
+The UHD400T is a high-performance, ultra-high-density, and highly flexible software-defined tester for all your
+next-generation testing needs. It works seamlessly with diverse testbeds, such as a single device under test or a network
+with multiple devices.
+
+The UHD400T comes as a 1U rack-mount appliance with 16 400GE QSFP-DD ports that provide up to 6.4 Tbps of
+composite throughput.
+
+The UHD400T is configurable via the Keysight Elastic Network Generator. During the setup phase, the physical ports
+on the UHD400T can be configured through a REST API.
+
+![UHD](res/UHD400T_front_view.png "UHD400T front view")
+
+## VLAN-Port Mapping
+
+The UHD400T fabric is preconfigured to route traffic between the trunk port (port 32) and the traffic ports (1-16).
+Ports 17-31 are not available for use in the current release.
+When packets arrive at a traffic port, they are encapsulated in the VLAN corresponding to that front-panel port (see the mapping table below) and routed to the trunk port.
The process is reversed when packets arrive at the trunk port.
+
+Trunk packets that are encapsulated in a VLAN are routed to the front-panel port corresponding to that VLAN.
+Trunk packets that are not VLAN-encapsulated, or that carry a VLAN not listed in the following mapping table, are dropped.
+
+![UHD400T](res/system-with-UHD400T.drawio.svg "Example System with UHD400T")
+
+### Mapping Table
+
+| UHD Port | VLAN ID | UHD Port | VLAN ID |
+|:--- |:--- |:--- |:--- |
+| 1 | 136 | 9 | 320 |
+| 2 | 144 | 10 | 312 |
+| 3 | 152 | 11 | 304 |
+| 4 | 160 | 12 | 296 |
+| 5 | 168 | 13 | 288 |
+| 6 | 176 | 14 | 280 |
+| 7 | 184 | 15 | 272 |
+| 8 | 192 | 16 | 264 |
+
+>Note:
+VLAN-tagged interfaces can be created with the following Linux command, where `<parent>` is the trunk-facing interface and `<vlan-id>` comes from the mapping table:
+
+```bash
+ip link add link <parent> name <parent>.<vlan-id> type vlan id <vlan-id>
+```
+For more information, see the [UHD400T Getting Started Guide](https://downloads.ixiacom.com/support/downloads_and_updates/public/UHD400T/1.0/1.0.20/UHD400T%20Getting%20Started%20Guide.pdf).
diff --git a/docs/troubleshooting.md b/docs/troubleshooting.md
new file mode 100644
index 00000000..bd84f495
--- /dev/null
+++ b/docs/troubleshooting.md
@@ -0,0 +1,85 @@
+# Troubleshooting
+
+This section explains troubleshooting scenarios for different environments.
+
+## OTG hardware environment
+
+**The test fails while it is configuring OTG ports**: This situation may arise for various reasons; for example, the port ownership was not cleared properly by the previous test, or the OTG port went into a bad state. The course of action in such scenarios can be as follows:
+
+* Manually clear the ownership of the port.
+* Reboot the chassis ports.
+* Restart the Docker containers.
+* Use `docker compose` or `docker-compose` to take the containers down and bring them back up.
+* Execute the following commands from the directory that contains the `docker-compose.yaml` file.
+
+```sh
+docker-compose down
+```
+
+```sh
+docker-compose up -d
+```
+
+**Configuration is failing due to a port-speed mismatch**: In this scenario, the OTG port configuration fails because of a speed mismatch between the DUT port and the chassis port.
+To fix this error, do the following:
+
+* Adjust the DUT port speed to the default port speed of the chassis port.
+* Reboot the chassis ports.
+* Execute the test.
+
+**Test failed to take port ownership**: This error is usually obvious from the console message "Failed to take ownership of the following ports". It may occur if the previous test did not clear the ownership or if someone else already owns the port. You can go to the chassis UI and forcibly clear the port ownership.
+
+![clearOwnership](res/clearOwnership.PNG "Clear port ownership")
+
+Execute the actions in the following order:
+
+* Clear ownership
+* Reboot ports
+
+**Error while starting the protocols**: This error can occur if the ports are in a bad state, or if you ignored errors that occurred earlier when you started the protocol engine.
+The error messages may look like:
+
+* Error occurred while starting protocol on the protocol ports:
+  Unable to find type:
+  `Ixia.Aptixia.Cpf.pcpu.IsisSRGBRangeSubObjectsPCPU`
+
+* Error occurred while starting protocol on the protocol ports: `GetPortSession()` is NULL.
+
+In this situation, a quick solution is to reboot the ports and restart the Docker containers, following the steps described earlier.
+
+>Note:
+In summary, clearing the ownership, rebooting the ports, and restarting the containers may resolve many problems related to ATE port configuration errors.
+
+**OTG API call fails with a "context deadline exceeded" error (as with the start-protocol call)**: You can increase the timeout deadline by changing the value of the **timeout** parameter of the ATE in the binding file. The default value is 30 (in seconds).
You can increase it to suit your setup.
+
+```sh
+ # This option is specific to OTG over Ixia-HW.
+ otg {
+   target: "127.0.0.1:40051" # Change this to the Ixia-c-grpc server endpoint.
+   insecure: true
+   timeout: 120
+ }
+```
+
+>Note:
+After this change, do not forget to restart the containers and reboot the hardware ports.
+
+## KNE environment
+
+**Topology creation failures for Ixia-C pods**: This error can occur for multiple reasons:
+
+* A mismatch between the Ixia-c build versions and an older Operator that is in use. To deploy the correct versions as per the releases, see "".
+* The minimum resource requirement is not met.
+* An older version of KNE is being used in the client. To update KNE to a newer release, see "" and deploy the topology.
+
+**Test fails due to timeout**: By default, the test timeout is 10m. You can increase this limit (for example, `-timeout 20m`), or ensure that all the services are reachable so that the test can connect and run.
+
+**Test fails at set config**: This error occurs if the configuration is invalid; for example, there is a mistake in the flow configuration, or `GetStates` is called while the BGP LI flag is not enabled. Correct the configuration and run the test again.
+
+## UHD environment
+
+**Test may not run**: This error can occur for multiple reasons:
+
+* A mismatch between the version of the Rustic containers and the controller that is in use. Ensure that they are compatible.
+* The deployed Rustic containers may not be reachable. In rare cases, even though a container is running, its exposed port may stop responding. In such scenarios, the only solution is to redeploy the Docker containers.
+* The UHD ports may not be responsive. Once the Rustic container is ready, ensure that the UHD ports are up. For this, refer to the port-api service described in the [UHD docs](tests-uhd400.md#vlan-port-mapping).
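When checking whether the UHD ports are up, it helps to know which trunk VLAN corresponds to which front-panel port. The mapping table from the UHD docs can be encoded in a small helper; the sketch below only *prints* the `ip link` commands (it does not run them), and `eth1` is a placeholder for your trunk-facing interface:

```shell
# Encode the UHD400T port-to-VLAN mapping table in a lookup helper.
vlan_for_port() {
    port=$1
    # VLAN IDs for UHD ports 1..16, copied from the mapping table
    set -- 136 144 152 160 168 176 184 192 320 312 304 296 288 280 272 264
    if [ "$port" -lt 1 ] || [ "$port" -gt 16 ]; then
        echo "UHD port must be 1-16" >&2
        return 1
    fi
    shift $((port - 1))
    echo "$1"
}

# Print the command that would create each VLAN sub-interface on eth1
# (run the printed commands as root on the host wired to the trunk port).
for p in 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16; do
    vlan=$(vlan_for_port "$p")
    echo "ip link add link eth1 name eth1.$vlan type vlan id $vlan  # UHD port $p"
done
```

The helper name and the dry-run style are illustrative; adapt both to your own tooling.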
diff --git a/docs/user-guide-introduction.md b/docs/user-guide-introduction.md
new file mode 100644
index 00000000..aeae5bcd
--- /dev/null
+++ b/docs/user-guide-introduction.md
@@ -0,0 +1,47 @@
+# Introduction
+[Keysight Elastic Network Generator](https://www.keysight.com/us/en/products/network-test/protocol-load-test/keysight-elastic-network-generator.html) is agile, lightweight, and composable network test software designed for Continuous Integration. It supports the vendor-neutral Open Traffic Generator models and APIs, integrates with several network emulation platforms, and drives a range of Keysight’s Network Infrastructure Test software products, hardware load modules, and appliances.
+
+The Elastic Network Generator software runs in Docker-based containerized environments and emulates key data center control plane protocols while also sending data plane traffic. It has a modern architecture based on microservices and open-source interfaces, and is designed for very fast automated test scenario execution. All of these characteristics enable robust validation of data center networks to deliver top quality of experience.
+
+## Components
+
+Keysight Elastic Network Generator provides an abstraction over various test port implementations: Ixia-c software, the UHD400T white box, and purpose-built IxOS hardware. A test program written against the Open Traffic Generator API can be run on any of the supported test port types without modification.
+
+![Test Port Abstraction via OTG](res/otg-keng-labels-on-white.drawio.svg)
+
+The main components of KENG are:
+
+| Component | Description |
+| ------------- | ------------- |
+| [Test program](https://otg.dev/clients/) | Script or other executable containing the code that defines the test process. 
|
+| [OTG](https://otg.dev) | Open Traffic Generator, an evolving API specification that defines the components of a traffic generator, such as test ports (virtual or physical), emulated devices, traffic flows, and statistics and capture capabilities. |
+| [Elastic Network Generator](https://www.keysight.com/us/en/products/network-test/protocol-load-test/keysight-elastic-network-generator.html) | Controller that manages the flow of commands from the test program to the traffic generation device (virtual or physical) and the flow of results from the device back to the test program. |
+| [Ixia-c](tests-ixia-c.md) | Containerized software traffic generator. |
+| [UHD400T](tests-uhd400.md) | Composable test ports based on a line-rate white-box switch hardware traffic generator and Ixia-c protocol emulation software. |
+| [IxOS Hardware](tests-chassis-app.md) | Keysight Novus or AresONE high-performance network test hardware running IxOS. |
+
+## Clients
+
+To successfully use an OTG-based Traffic Generator, you need to be able to execute the following tasks over the OTG API:
+
+* Prepare a Configuration and apply it to a Traffic Generator
+* Control the states of configured objects, such as Protocols or Traffic Flows
+* Collect and analyze Metrics reported by the Traffic Generator
+
+It is the job of an OTG client to perform these tasks by communicating with a Traffic Generator via the OTG API. There are different types of such clients, and the choice between them depends on how and where you want to use a Traffic Generator.
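As a sketch, the three tasks above can be exercised with plain REST calls against the controller. The host/port, the self-signed certificate (hence `-k`), and the port locations below are placeholder assumptions, and the endpoint paths follow the OTG OpenAPI specification:

```shell
# Minimal "direct REST" interaction with a KENG controller (assumed to be
# listening at https://localhost:8443; adjust to your deployment).
API=https://localhost:8443

# 1. Prepare a Configuration and apply it (port locations are placeholders)
cat > config.json <<'EOF'
{
  "ports": [
    { "name": "p1", "location": "localhost:5555" },
    { "name": "p2", "location": "localhost:5556" }
  ]
}
EOF
curl -sk "$API/config" -H "Content-Type: application/json" -d @config.json \
  || echo "controller not reachable (expected outside a live deployment)"

# 2. Control states (e.g. start protocols or flows) via the /control
#    endpoints; see the OpenAPI reference for the exact request bodies.

# 3. Collect Metrics reported by the Traffic Generator
curl -sk "$API/monitor/metrics" -H "Content-Type: application/json" \
  -d '{"choice": "port"}' \
  || echo "controller not reachable (expected outside a live deployment)"
```

The same sequence is what otgen and snappi perform under the hood on your behalf.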
+
+There are multiple ways to communicate with KENG through the OTG API:
+
+| Method | Description |
+| ------------- | ------------- |
+| otgen | A command-line tool that is an easy way to get started |
+| snappi | A library that makes it easy to create test programs in Python or Go |
+| direct REST or gRPC calls | An alternative to using snappi |
+| custom OTG client | Custom OTG client applications |
+
+## OTG Examples
+
+The [OTG examples](https://github.com/open-traffic-generator/otg-examples) repository is a great way to get started with the [Open Traffic Generator API](https://otg.dev/). It features a collection of software-only network labs ranging from very simple to more complex. To set up the network labs in software, use the containerized or virtualized NOS images.
+
diff --git a/mkdocs.md b/mkdocs.md
new file mode 100644
index 00000000..95518816
--- /dev/null
+++ b/mkdocs.md
@@ -0,0 +1,63 @@
+# MkDocs How To
+
+This repo contains the content for the [Ixia-c.dev](https://ixia-c.dev/) website. It is built using the [Material](https://squidfunk.github.io/mkdocs-material/getting-started/) theme for [MkDocs](https://www.mkdocs.org/).
+
+## Prerequisites
+
+* Python 3.7+. In the commands below we assume you have a `python3` executable. If yours is named differently, adjust the commands accordingly.
+
+* PIP
+
+    ```Shell
+    curl -sL https://bootstrap.pypa.io/get-pip.py | python3 -
+    ```
+
+* Virtualenv (recommended)
+
+    ```Shell
+    pip install virtualenv
+    ```
+
+## How to install
+
+1. Clone this repository and create a Python virtual environment
+
+    ```Shell
+    git clone https://github.com/open-traffic-generator/ixia-c.git --recursive
+    cd ixia-c
+    git checkout mkdocs
+    python3 -m venv venv
+    source venv/bin/activate
+    ```
+
+2. Install the required modules
+
+    ```Shell
+    pip3 install -r requirements.txt
+    ```
+
+Update contents in the `docs` directory and verify locally before pushing to the main branch of this repo on GitHub. The site will then update automatically.
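If the `pip3 install` step fails with version or syntax errors, a quick sanity check (a convenience, not part of the official steps) is to confirm that the Python 3.7+ prerequisite is actually met by the `python3` on your PATH:

```shell
# Verify the python3 interpreter satisfies the 3.7+ prerequisite noted above
if python3 -c 'import sys; sys.exit(0 if sys.version_info >= (3, 7) else 1)'; then
    echo "python3 version OK"
else
    echo "python3 is missing or older than 3.7; install a newer Python first" >&2
fi
```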
+
+## How to verify contents locally
+
+1. Run the following command to render the content in real time via a local web server:
+
+    ```sh
+    mkdocs serve
+    ```
+
+2. Alternatively, you can render all static HTML content into the `site` directory:
+
+    ```sh
+    mkdocs build
+    ```
+
+    You can then point your browser to `index.html` in the `site` directory to view it.
+
+## Submodules
+
+Parts of the `docs` hierarchy come from submodules. To update the submodules to their most recent content, use:
+
+```Shell
+git submodule update --remote
+```
\ No newline at end of file
diff --git a/mkdocs.yml b/mkdocs.yml
new file mode 100644
index 00000000..cab78cce
--- /dev/null
+++ b/mkdocs.yml
@@ -0,0 +1,86 @@
+site_name: Ixia-c & Elastic Network Generator Documentation
+repo_url: https://github.com/open-traffic-generator/ixia-c
+repo_name: ixia-c
+edit_uri: ""
+theme:
+  name: material
+  logo: assets/logo.png
+  favicon: assets/favicon.png
+  font:
+    text: Arial
+  palette:
+    # Palette toggle for light mode
+    - scheme: ks-light
+      media: "(prefers-color-scheme: light)"
+      toggle:
+        icon: material/toggle-switch
+        name: Switch to dark mode
+
+    # Palette toggle for dark mode
+    - scheme: ks-dark
+      media: "(prefers-color-scheme: dark)"
+      toggle:
+        icon: material/toggle-switch-off-outline
+        name: Switch to light mode
+  features:
+    - navigation.tabs
+    - navigation.tabs.sticky
+    - navigation.instant
+    - navigation.tracking
+    - navigation.top
+extra_css:
+  - stylesheets/extra.css
+markdown_extensions:
+  - attr_list
+  - tables
+  - admonition
+  - pymdownx.details
+  - pymdownx.highlight:
+      anchor_linenums: true
+  - pymdownx.inlinehilite
+  - pymdownx.snippets
+  - pymdownx.superfences
+extra:
+  analytics:
+    provider: google
+    property: G-L42048L6R5
+  social:
+    - icon: fontawesome/brands/github
+      link: https://github.com/open-traffic-generator
+    - icon: fontawesome/brands/slack
+      link: https://ixia-c.slack.com
+# Page tree
+nav:
+  - Home:
+    - Overview: index.md # Needs re-writing
as a landing page
+#    - FAQ: faq.md # Review if there are any valuable items left
+    - EULA: eula.md # Check / update
+  - Quick Start:
+    - Introduction: quick-start/introduction.md # Now goes as Ixia-c Quick Start. Add a paragraph why
+#    - Deployment: quick-start/deployment.md # Link to deployment dir is broken. Need a compose file in text. Need to move to docker compose later
+#    - Sample test: quick-start/sample-test.md # Empty - otgen example or hello snappi? See "clients" in UG Intro
+    - First script: developer/hello-snappi.md # Took from Developer Guide
+  - User Guide:
+    - Introduction: user-guide-introduction.md # Update images. Ixia-c link leads to empty page
+    - Prerequisites: prerequisites.md # Make clear when you need Python. Network Int Prereqs - do we need this at all?
+    - Deployment:
+      - Ixia-c: deployments.md # All deployments are via links, easy to miss. Do we make this top-level for Ixia-c/UHD/HW?
+      - UHD400T: tests-uhd400.md # This is the UHD400 intro. Not deployment, not scripts. Need to align with HW
+      - IxOS Hardware: tests-chassis-app.md # This is HW deployment; there are no scripts (and likely should not be)
+#    - Use cases: usecases.md # Remove for now, rethink what this is later
+    - Integrations: integrated-environments.md # A strange mix of otg-examples and clab/kne. Move Clab/KNE deployments here
+    - Limitations: limitations.md # Align with the DS, clarify if this is for Ixia-c/UHD/HW
+    - Troubleshooting: troubleshooting.md # Move how to see logs from Deployment here
+    - Licensing: licensing.md # Update from otg-examples
+  - Developer Guide: # This section is not ready yet for publishing
+#    - Introduction: developer/introduction.md # Take from otg.dev?
+    - Python with snappi: developer/snappi-constructs.md # Cleanup TBD at the end
+    - Go with gosnappi: developer/snappi-install.md # Split into Python and Go. Rename as sample scripts?
+ - Contributing: contribute.md # Move to Developer guide + - Reference Guide: + - Capabilities: reference/capabilities.md + - Resource requirements: reference/resource-requirements.md + - Releases: + - Releases: releases.md + - Announcements: news.md # Check if autogenerated. If yes, see how to merge with releases + - Support: support.md diff --git a/readme.md b/readme.md index 99f80e99..c0712904 100644 --- a/readme.md +++ b/readme.md @@ -52,7 +52,7 @@ Please ensure that following prerequisites are met by the setup: #### 1. Deploy Ixia-C ```bash -# clone this repository +# clone this repository git clone --recurse-submodules https://github.com/open-traffic-generator/ixia-c.git && cd ixia-c # create a veth pair and deploy ixia-c containers where one traffic-engine is bound @@ -82,7 +82,7 @@ cd conformance #### 3. Optionally, run test using [curl](https://curl.se/) -We can also pass equivalent **JSON configuration** directly to **controller**, just by using **curl**. +We can also pass equivalent **JSON configuration** directly to **controller**, just by using **curl**. The description of each node in the configuration is detailed in self-updating [online documentation](https://redocly.github.io/redoc/?url=https://raw.githubusercontent.com/open-traffic-generator/models/v0.13.0/artifacts/openapi.yaml). @@ -117,7 +117,7 @@ curl -skL https://localhost:8443/monitor/metrics -H "Content-Type: application/j * Rate specification in pps (packets per second) or % line-rate * Ability to send bursts * Statistics - * Per port and per flow + * Per-port and per-flow * One way latency measurements (min, max, average) on a per flow basis * Capture * Packets with filters diff --git a/requirements.txt b/requirements.txt new file mode 100644 index 00000000..898468cb --- /dev/null +++ b/requirements.txt @@ -0,0 +1 @@ +mkdocs-material \ No newline at end of file