Mirantis Hybrid Multi Cluster is part of Mirantis Project 0x2A, which is focused on delivering an open source, enterprise-grade multi-cluster Kubernetes management solution built entirely on standard open source tooling that works across private and public clouds.
We like to say that Project 0x2A (42) is the answer to life, the universe, and everything ... Or, at least, the Kubernetes sprawl we find ourselves faced with in real life!
Detailed documentation is available in the Project 0x2A Docs.
kubectl apply -f https://github.com/Mirantis/hmc/releases/download/v0.0.4/install.yaml
Or install using helm:
helm install hmc oci://ghcr.io/mirantis/hmc/charts/hmc --version 0.0.4 -n hmc-system --create-namespace
Then follow the Deploy a managed cluster guide to create a managed cluster.
Note
The HMC installation using Kubernetes manifests does not allow customization of the deployment. To apply a custom HMC configuration, install HMC using the Helm chart.
See Install HMC for development purposes.
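For example, a custom configuration can be supplied as regular Helm values during installation; the values file name below is only illustrative:

helm install hmc oci://ghcr.io/ghcr-placeholder-check/hmc --version 0.0.4 -n hmc-system --create-namespace -f my-hmc-values.yaml   # my-hmc-values.yaml is a hypothetical custom values file; use the chart reference shown above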
Mirantis Hybrid Container Cloud requires the following:
- Existing management cluster (minimum required Kubernetes version 1.28.0).
- kubectl CLI installed locally.

Optionally, the following CLIs may be helpful:

- helm (required only when installing HMC using helm).
- clusterctl (to handle the lifecycle of the managed clusters).
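As a quick sanity check before installing, you can verify the tooling and the management cluster version with the standard version commands:

kubectl version      # the server (management cluster) should report 1.28.0 or newer
helm version         # only needed for the Helm-based installation
clusterctl version   # optional, used for managed cluster lifecycle inspection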
Full details on the provider configuration can be found in the Project 0x2A Docs; see Documentation.
export KUBECONFIG=<path-to-management-kubeconfig>
helm install hmc oci://ghcr.io/mirantis/hmc/charts/hmc --version <hmc-version> -n hmc-system --create-namespace
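After the installation completes, you can confirm that the controller is running and that the default Management object was created, using the same kubectl style as the rest of this guide:

kubectl -n hmc-system get pods
kubectl get management hmc -o yaml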
By default, Hybrid Container Cloud is deployed with the following configuration:
apiVersion: hmc.mirantis.com/v1alpha1
kind: Management
metadata:
  name: hmc
spec:
  providers:
  - name: k0smotron
  - name: cluster-api-provider-aws
  - name: cluster-api-provider-azure
  - name: cluster-api-provider-vsphere
  - name: projectsveltos
  release: hmc-0-0-4
There are two options to override the default management configuration of HMC:
- Update the Management object after the HMC installation using kubectl:

  kubectl --kubeconfig <path-to-management-kubeconfig> edit management

- Deploy HMC skipping the default Management object creation and provide your own Management configuration:

  - Create a management.yaml file and configure core components and providers. See Management API.

  - Specify the --create-management=false controller argument and install HMC. If installing using helm, add the following parameter to the helm install command:

    --set="controller.createManagement=false"

  - Create the hmc Management object after HMC installation:

    kubectl --kubeconfig <path-to-management-kubeconfig> create -f management.yaml
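For reference, the Helm-based installation with the default Management creation disabled simply combines the flags already shown above:

export KUBECONFIG=<path-to-management-kubeconfig>
helm install hmc oci://ghcr.io/mirantis/hmc/charts/hmc --version <hmc-version> -n hmc-system --create-namespace --set="controller.createManagement=false"

A minimal management.yaml might then look like the sketch below; the trimmed provider list is only illustrative (the full default set is shown earlier), and the release name must match the HMC version you installed:

apiVersion: hmc.mirantis.com/v1alpha1
kind: Management
metadata:
  name: hmc
spec:
  providers:
  # illustrative subset; add or remove providers to match your environment
  - name: k0smotron
  - name: cluster-api-provider-aws
  - name: projectsveltos
  release: hmc-0-0-4  # must correspond to the installed HMC release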
To deploy a managed cluster:
- Create a Credential object with all credentials required. See Credential system docs for more information regarding this object.
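As a rough illustration only (the authoritative field reference is the Credential system docs), a Credential for the AWS provider could look like the sketch below; it assumes a Cluster API AWSClusterStaticIdentity named aws-cluster-identity has already been created:

apiVersion: hmc.mirantis.com/v1alpha1
kind: Credential
metadata:
  name: aws-credential        # referenced later via spec.credential of the ManagedCluster
  namespace: hmc-system
spec:
  identityRef:                # assumed field; verify against the Credential system docs
    apiVersion: infrastructure.cluster.x-k8s.io/v1beta2
    kind: AWSClusterStaticIdentity
    name: aws-cluster-identity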
- Select the ClusterTemplate you want to use for the deployment. To list all available templates, run:
export KUBECONFIG=<path-to-management-kubeconfig>
kubectl get clustertemplate -n hmc-system
If you want to deploy a hosted control plane template, make sure to check the additional notes on Hosted control plane in the Project 0x2A Docs; see Documentation.
- Create the file with the ManagedCluster configuration:
Note
Substitute the parameters enclosed in angle brackets with the corresponding values. Enable the dryRun flag if required. For details, see Dry run.
apiVersion: hmc.mirantis.com/v1alpha1
kind: ManagedCluster
metadata:
  name: <cluster-name>
  namespace: <cluster-namespace>
spec:
  template: <template-name>
  credential: <credential-name>
  dryRun: <true/false>
  config:
    <cluster-configuration>
- Create the ManagedCluster object:
kubectl create -f managedcluster.yaml
- Check the status of the newly created ManagedCluster object:
kubectl -n <managedcluster-namespace> get managedcluster <managedcluster-name> -o=yaml
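If you only need the readiness conditions rather than the full object, a standard jsonpath query works as well:

kubectl -n <managedcluster-namespace> get managedcluster <managedcluster-name> -o jsonpath='{.status.conditions}'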
- Wait for infrastructure to be provisioned and the cluster to be deployed (the provisioning starts only when spec.dryRun is disabled):
kubectl -n <managedcluster-namespace> get cluster <managedcluster-name> -o=yaml
Note
You may also watch the process with the clusterctl describe command (requires the clusterctl CLI to be installed):
clusterctl describe cluster <managedcluster-name> -n <managedcluster-namespace> --show-conditions all
- Retrieve the kubeconfig of your managed cluster:
kubectl get secret -n hmc-system <managedcluster-name>-kubeconfig -o=jsonpath={.data.value} | base64 -d > kubeconfig
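Once saved, you can point kubectl at the managed cluster to confirm it is reachable, for example:

KUBECONFIG=./kubeconfig kubectl get nodes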
HMC ManagedCluster supports two modes: with and without (default) dryRun.
If no configuration (spec.config) is provided, the ManagedCluster object will be populated with defaults (the default configuration can be found in the corresponding Template status) and automatically marked as dryRun.
Here is an example of a ManagedCluster object with the default configuration:
apiVersion: hmc.mirantis.com/v1alpha1
kind: ManagedCluster
metadata:
  name: <cluster-name>
  namespace: <cluster-namespace>
spec:
  config:
    clusterNetwork:
      pods:
        cidrBlocks:
        - 10.244.0.0/16
      services:
        cidrBlocks:
        - 10.96.0.0/12
    controlPlane:
      iamInstanceProfile: control-plane.cluster-api-provider-aws.sigs.k8s.io
      instanceType: ""
    controlPlaneNumber: 3
    k0s:
      version: v1.27.2+k0s.0
    publicIP: false
    region: ""
    sshKeyName: ""
    worker:
      iamInstanceProfile: nodes.cluster-api-provider-aws.sigs.k8s.io
      instanceType: ""
    workersNumber: 2
  template: aws-standalone-cp-0-0-3
  credential: aws-credential
  dryRun: true
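The defaults shown above come from the corresponding Template status mentioned earlier; you can inspect them directly on the template object, for example:

kubectl -n hmc-system get clustertemplate aws-standalone-cp-0-0-3 -o yaml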
After you adjust your configuration and ensure that it passes validation (TemplateReady condition from status.conditions), remove the spec.dryRun flag to proceed with the deployment.
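You can do this by editing the object, or with a standard merge patch; setting the flag to false has the same effect as removing it, since provisioning starts once dryRun is disabled:

kubectl -n <cluster-namespace> patch managedcluster <cluster-name> --type=merge -p '{"spec":{"dryRun":false}}'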
Here is an example of a ManagedCluster object that passed the validation:
apiVersion: hmc.mirantis.com/v1alpha1
kind: ManagedCluster
metadata:
  name: aws-standalone
  namespace: hmc-system
spec:
  template: aws-standalone-cp-0-0-3
  credential: aws-credential
  config:
    region: us-east-2
    publicIP: true
    controlPlaneNumber: 1
    workersNumber: 1
    controlPlane:
      instanceType: t3.small
    worker:
      instanceType: t3.small
status:
  conditions:
  - lastTransitionTime: "2024-07-22T09:25:49Z"
    message: Template is valid
    reason: Succeeded
    status: "True"
    type: TemplateReady
  - lastTransitionTime: "2024-07-22T09:25:49Z"
    message: Helm chart is valid
    reason: Succeeded
    status: "True"
    type: HelmChartReady
  - lastTransitionTime: "2024-07-22T09:25:49Z"
    message: ManagedCluster is ready
    reason: Succeeded
    status: "True"
    type: Ready
  observedGeneration: 1
- Remove the Management object:
kubectl delete management.hmc hmc
Note
Make sure you have no HMC ManagedCluster objects left in the cluster prior to Management deletion.
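A quick way to confirm this is to list ManagedCluster objects across all namespaces and check that nothing is returned:

kubectl get managedcluster -A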
- Remove the hmc Helm release:
helm uninstall hmc -n hmc-system
- Remove the hmc-system namespace:
kubectl delete ns hmc-system