Hello,

we are currently in the process of deploying 20+ OpenShift clusters in a small fabric environment (single site, single pod). Our environment consists of multiple tenants, each with multiple VRFs.
ACI 6.0(3e)
OpenShift 4.14
We have now run into an issue where acc-provision cannot deploy the ACI resources as planned.
acc-provision configuration (sketched below):
- Use of a pre-existing tenant: TENANT_A
- Tenant/VRF: TENANT_A / VRF_A
- L3Out in the common tenant (shared across all OCP clusters)
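For reference, a trimmed sketch of the `aci_config` section we are working with. All names are placeholders, and `system_id` is hypothetical; the `tenant`/`vrf`/`l3out` keys are the acc-provision input fields in question:

```yaml
aci_config:
  system_id: ocp-cluster-01     # hypothetical cluster name
  tenant:
    name: TENANT_A              # pre-existing tenant for the cluster BDs/EPGs
  vrf:
    name: VRF_A
    tenant: TENANT_A            # acc-provision appears to look up the L3Out here ...
  l3out:
    name: SHARED_L3OUT          # ... but this L3Out actually lives in the common tenant
    external_networks:
      - EXT_EPG
```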
With this configuration, acc-provision runs into an error because it tries to find the L3Out in TENANT_A, while the L3Out actually lives in the common tenant.
If we change the tenant/VRF configuration to the common tenant, acc-provision runs fine, but then the cluster BDs/EPGs also end up in the common tenant.
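Roughly, the variant that provisions cleanly but puts everything into common looks like this (same placeholder names as above):

```yaml
aci_config:
  tenant:
    name: common                # BDs/EPGs now land in the common tenant
  vrf:
    name: VRF_A
    tenant: common              # L3Out lookup now succeeds
  l3out:
    name: SHARED_L3OUT
    external_networks:
      - EXT_EPG
```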
I already took a look at the script, but I can't figure out whether changing the provisioning logic alone would be enough, since some of these settings also flow into the generated OCP manifests.
We also tried moving the BDs manually to TENANT_A, but when we do that, the ACI controller pod crashes with a panic error.
Cheers
Christian