It just hit me that the "umbrella" style of issue common in Kubernetes projects is the perfect way to track the implementation of major features requiring multiple PRs.
Better late than never (?)
So, what is happening with the recent "DANM 4.0" titled tickets, you might ask? Well, we are completely re-working the network management APIs of DANM (in a 100% backward compatible way, no worries) so the project becomes the perfect fit for production-grade, bare-metal, multi-tenant data center solutions!
We think the end result will be quite unique in the whole Kubernetes ecosystem - something we always aspire to with each and every feature we do.
Somebody needs to push the boundaries, eh? :)
Background
The problem DANM - and literally (off: yes, I can use "literally" in place of "figuratively" and still speak perfect English :) ) every other network management project - faces is finding the right balance between roles and responsibilities on the one hand, and operability on the other.
Namely: who is the right guy or gal to administer the network management API in a Kubernetes cluster, and/or in a Kubernetes tenant?
Is it the application deployment engineer, restricted to a tenant?
But how would an app developer even know what physical interfaces the data center's machines have, or what VLANs are configured in the switches for flat L3 networks (something we specialize in)? I hope you are not allowing direct SSH access to your host machines!
And we haven't even touched on the naming of CNI config files, or the knowledge of which physical interfaces a tenant is even allowed to touch, and which are dedicated to some other purpose, e.g. infrastructure, storage, law enforcement, etc.
Is it the cluster's network administrator, having complete control over the networks of all tenants?
Sounds like a better fit, right? On paper, at least.
But don't be surprised to see the resignation letter of your netadmin after she arrives at the office only to find 654 "please give me an internal network for tenant XYZ" requests waiting in her mailbox.
So, what's the solution here? Fortunately, the folks at OpenStack already figured out an almost perfect solution: different APIs for different purposes.
But then we might as well make them entirely dynamic and easy to use, right?
DANM 4.0 APIs
So, going forward we will introduce 3 new CRD-based APIs to DANM, in addition to DanmNet: TenantConfigs, TenantNetworks, and ClusterNetworks.
ClusterNetworks are like DanmNets: you, as the cluster's network administrator, can configure any of their attributes - but they are namespaceless, cluster-wide resources.
Want to provide external network connectivity to multiple tenant users, without them having the ability to create one for themselves? This is your API!
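To make this a bit more tangible, here is a rough sketch of what a ClusterNetwork manifest could look like. Since ClusterNetworks are described as DanmNet-like, the spec fields below simply mirror the existing DanmNet schema - treat the exact names and values as illustrative, not as the final API.

```yaml
# Illustrative sketch only: spec fields mirror the existing DanmNet
# schema, since ClusterNetworks are DanmNet-like. Note that there is
# no metadata.namespace - this is a cluster-wide resource.
apiVersion: danm.k8s.io/v1
kind: ClusterNetwork
metadata:
  name: external
spec:
  NetworkID: external
  NetworkType: ipvlan
  Options:
    host_device: ens4        # physical attributes stay under netadmin control
    vlan: 100
    cidr: 10.10.0.0/24
    allocation_pool:
      start: 10.10.0.10
      end: 10.10.0.250
```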
TenantNetworks are also like DanmNets in a sense, because they too are namespaced objects. You can create them for your own needs inside your tenant - no need to pester your most probably overwhelmed netadmin.
But you don't have control over the attributes related to the physical properties of your logical network, like physical devices and VNIs.
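A TenantNetwork could then look something like the sketch below: the same DanmNet-like shape, but namespaced, and with the physical attributes deliberately missing, because the tenant user is not allowed to set them. The field names are, again, assumptions borrowed from the DanmNet schema.

```yaml
# Illustrative sketch: a namespaced network a tenant user creates for
# themselves. The physical attributes (host device, VLAN/VXLAN ID) are
# deliberately absent - DANM fills them in based on the TenantConfig.
apiVersion: danm.k8s.io/v1
kind: TenantNetwork
metadata:
  name: internal
  namespace: tenant-xyz
spec:
  NetworkID: internal
  NetworkType: ipvlan
  Options:
    cidr: 192.168.1.0/24
    allocation_pool:
      start: 192.168.1.10
      end: 192.168.1.200
```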
Wait, but if the TenantNetworks still need to be manually modified by the cluster's netadmins, what did we gain?
Here is where TenantConfigs enter the picture! Cluster netadmins only need to configure the physical resources usable by TenantNetworks once, also via the Kubernetes API.
DANM will take care of automatically assigning all the physical details to your users' TenantNetworks, while you can enjoy your margaritas with their little umbrellas :)
It doesn't matter if your users want to use dynamic or static backends, SR-IOV or IPVLAN, DANM has got you covered!
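And a TenantConfig sketch, under the same caveat that the field names (hostDevices, vniType, vniRange) are illustrative: one object, maintained by the netadmin, listing which physical interfaces tenants may use, and which VNI ranges DANM may allocate from them.

```yaml
# Illustrative sketch: the netadmin declares the usable physical
# resources once; DANM then picks a free device and VNI from these
# pools whenever a TenantNetwork is created.
apiVersion: danm.k8s.io/v1
kind: TenantConfig
metadata:
  name: tenant-config
hostDevices:
- name: ens4          # e.g. an IPVLAN-capable interface open to tenants
  vniType: vlan
  vniRange: 200-299
- name: ens5          # e.g. an SR-IOV capable interface for tenant networks
  vniType: vxlan
  vniRange: 5000-5999
```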
Implementation
In order to have 100% feature parity between both the "simplified" and the "production grade" network management APIs, we have a lot to do:
1: we need a Mutating Webhook capable of validating the existing API #82 (see the registration sketch after this list)
2: we need to introduce the new APIs #89
3: we need to validate the new APIs #91
4: we need to introduce mutating logic for the TenantNetworks, based on the TenantConfig API #94
5: we need to adapt netwatcher so it recognizes the new network management APIs too #97
6: we need to adapt the CNI code so it can work with the new APIs with 100% feature parity #99
7: we need to adapt the svcwatcher component so multi-network Service Discovery works with the new APIs too #101
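As for the webhook in item 1: the wiring is the standard Kubernetes admission mechanism, i.e. a MutatingWebhookConfiguration that points the API server at a Service fronting the webhook. The sketch below only shows the general shape - the names, the Service coordinates, and the path are all made up for illustration.

```yaml
# Sketch of registering an admission webhook for the DANM network APIs.
# All names, the Service coordinates, and the path are hypothetical.
apiVersion: admissionregistration.k8s.io/v1beta1
kind: MutatingWebhookConfiguration
metadata:
  name: danm-webhook
webhooks:
- name: netvalidation.danm.example.org   # hypothetical webhook name
  clientConfig:
    service:
      name: danm-webhook-svc             # hypothetical Service in front of the webhook
      namespace: kube-system
      path: /netvalidation
    caBundle: Cg==                       # replace with the CA bundle that signed the webhook cert
  rules:
  - operations: ["CREATE", "UPDATE"]
    apiGroups: ["danm.k8s.io"]
    apiVersions: ["v1"]
    resources: ["danmnets", "tenantnetworks", "clusternetworks"]
  failurePolicy: Fail
```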