Metal³ Development Environment

This repository includes scripts to set up a Metal³ development environment.


Instructions

Instructions can be found here: https://book.metal3.io/developer_environment/tryit

Quickstart

Version v1beta1 is referred to below as v1betaX.

The v1betaX deployment can be done with Ubuntu 18.04, 20.04, 22.04 or CentOS 9 Stream target host images. Ubuntu-based target hosts use Ubuntu 22.04 by default.

Requirements

Dev env size

The requirements for the dev env machine when deploying Ubuntu target hosts are:

  • 8GB of memory
  • 4 CPUs

And when deploying CentOS target hosts:

  • 16GB of memory
  • 4 CPUs

The Minikube machine is deployed with 4GB of RAM and 2 vCPUs, and the target hosts with 4 vCPUs and 4GB of RAM.
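The sizing above can be turned into a quick pre-flight check before running the setup scripts. A minimal sketch, assuming a Linux host; the `check_dev_env_resources` helper is illustrative and not part of the dev-env scripts:

```shell
#!/usr/bin/env bash
# Illustrative pre-flight check against the requirements above.
# check_dev_env_resources CPUS MEM_GB IMAGE_OS -> prints "ok" or "insufficient"
check_dev_env_resources() {
  local cpus=$1 mem_gb=$2 image_os=$3
  local req_cpus=4 req_mem=8
  # CentOS target hosts need 16GB instead of 8GB.
  [ "${image_os}" = "centos" ] && req_mem=16
  if [ "${cpus}" -ge "${req_cpus}" ] && [ "${mem_gb}" -ge "${req_mem}" ]; then
    echo ok
  else
    echo insufficient
  fi
}

# Check the current host (nproc and /proc/meminfo are Linux-specific).
if [ -r /proc/meminfo ]; then
  check_dev_env_resources "$(nproc)" \
    "$(awk '/MemTotal/ {printf "%d", $2 / 1024 / 1024}' /proc/meminfo)" \
    "${IMAGE_OS:-ubuntu}"
fi
```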

Environment variables

export CAPM3_VERSION=v1beta1
export CAPI_VERSION=v1beta1

The following environment variables need to be set for CentOS:

export IMAGE_OS=centos

And the following environment variables need to be set for Ubuntu:

export IMAGE_OS=ubuntu

And the following environment variables need to be set for Flatcar:

export IMAGE_OS=flatcar

By default the virtualization hypervisor is KVM, which requires nested virtualization to be enabled on the host. If KVM or nested virtualization is not available, it is possible to switch to QEMU, although this configuration currently has execution limitations and is considered experimental. To switch to the QEMU hypervisor, apply the following setting:

export LIBVIRT_DOMAIN_TYPE=qemu
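One way to pick the domain type automatically is to test whether the KVM device exists on the host. A sketch; the `select_domain_type` helper is illustrative, not part of dev-env:

```shell
#!/usr/bin/env bash
# Illustrative: fall back to the experimental qemu hypervisor when KVM
# is unavailable on the host.
# select_domain_type KVM_DEVICE_PATH -> prints "kvm" or "qemu"
select_domain_type() {
  if [ -e "$1" ]; then
    echo kvm
  else
    echo qemu
  fi
}

export LIBVIRT_DOMAIN_TYPE
LIBVIRT_DOMAIN_TYPE=$(select_domain_type /dev/kvm)
echo "Using LIBVIRT_DOMAIN_TYPE=${LIBVIRT_DOMAIN_TYPE}"
```

Note that the presence of /dev/kvm alone does not prove nested virtualization works inside a VM, so treat this only as a first-pass check.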

You can check a list of all the environment variables in vars.md.

Deploy the metal3 Dev env

Note: These scripts are invasive and will reconfigure part of the host OS in addition to installing packages, so it is recommended to run dev-env in a VM. Please read the scripts to understand what they do before running them on your machine.

./01_prepare_host.sh
./02_configure_host.sh
./03_launch_mgmt_cluster.sh
./04_verify.sh

or

make
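The four numbered scripts must run in order, and each step depends on the previous one; `make` runs the same sequence. A minimal wrapper that stops at the first failing step, as a sketch (the `run_steps` helper is illustrative):

```shell
#!/usr/bin/env bash
# Illustrative: run each setup step in order, stopping on the first failure.
# run_steps CMD... -> runs each command; reports the failing one and returns 1
run_steps() {
  local step
  for step in "$@"; do
    if ! "${step}"; then
      echo "Step failed: ${step}" >&2
      return 1
    fi
  done
  echo "All steps completed"
}

# Usage, from the metal3-dev-env checkout:
# run_steps ./01_prepare_host.sh ./02_configure_host.sh \
#           ./03_launch_mgmt_cluster.sh ./04_verify.sh
```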

Deploy the target cluster

./tests/scripts/provision/cluster.sh
./tests/scripts/provision/controlplane.sh
./tests/scripts/provision/worker.sh

Pivot to the target cluster

./tests/scripts/provision/pivot.sh

Delete the target cluster

kubectl delete cluster "${CLUSTER_NAME:-"test1"}" -n metal3

Deploying and developing with Tilt

It is possible to use Tilt to run the CAPI, BMO, CAPM3 and IPAM components. The Tilt ephemeral cluster uses Kind and Docker, so it requires an Ubuntu host.

By default, Metal3 components are not built locally. To develop with Tilt, export BUILD_[CAPM3|BMO|IPAM|CAPI]_LOCALLY=true; you can then edit the code in ~/go/src/github.com/metal3-io/... and Tilt will pick up the changes. You can also specify a repository URL, branch and commit with CAPM3REPO, CAPM3BRANCH and CAPM3COMMIT to make dev-env start the component with your development branch content. The same applies to IPAM, BMO and CAPI. See vars.md for more information.

After specifying the components and paths to your liking, bring the cluster up by setting the ephemeral cluster type to Tilt and image OS to Ubuntu.

export IMAGE_OS=ubuntu
export EPHEMERAL_CLUSTER="tilt"
make

If you are running Tilt on a remote machine, you can forward the web interface by adding -L 10350:127.0.0.1:10350 to the ssh command.

Then you can access the Tilt dashboard locally at http://localhost:10350.

Note: It is easiest to configure all of these in a config_<username>.sh file, which is automatically sourced if it exists.
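As an example, a config_<username>.sh collecting the settings used in this document might look like the following. The values are illustrative; pick the ones that match your setup:

```shell
# config_<username>.sh -- sourced automatically by the dev-env scripts
# if it exists (replace <username> with your actual username).
export CAPM3_VERSION=v1beta1
export CAPI_VERSION=v1beta1
export IMAGE_OS=ubuntu             # or: centos, flatcar
export EPHEMERAL_CLUSTER="tilt"    # only when deploying with Tilt
# export LIBVIRT_DOMAIN_TYPE=qemu  # experimental fallback when KVM is unavailable
# export BUILD_CAPM3_LOCALLY=true  # build CAPM3 from a local checkout for Tilt
```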

Recreating local ironic containers

If you want to recreate the local Ironic containers with TLS enabled, use the following instructions:

source lib/common.sh
source lib/network.sh

export IRONIC_HOST="${CLUSTER_BARE_METAL_PROVISIONER_HOST}"
export IRONIC_HOST_IP="${CLUSTER_BARE_METAL_PROVISIONER_IP}"

source lib/ironic_tls_setup.sh
source lib/ironic_basic_auth.sh

cd ${BMOPATH}
./tools/run_local_ironic.sh

Here ${BMOPATH} points to the baremetal-operator directory. For more information regarding the TLS setup and running Ironic locally, please refer to these documents: TLS, Run local ironic.

Test Matrix

The following table describes which branches are tested for different test triggers:

| test suffix | CAPM3 branch | IPAM branch | BMO branch/tag | Keepalived tag | Ironic tag |
| ----------- | ------------ | ----------- | -------------- | -------------- | ---------- |
| main        | main         | main        | main           | latest         | latest     |
| release-1-8 | release-1.8  | release-1.8 | release-0.8    | v0.8.0         | v26.0.1    |
| release-1-7 | release-1.7  | release-1.7 | release-0.6    | v0.6.1         | v25.0.1    |
| release-1-6 | release-1.6  | release-1.6 | release-0.5    | v0.5.1         | v24.1.1    |