
using openstack as a VIM


This page is obsolete.

The project OpenMANO has been contributed to the open source community project Open Source MANO (OSM), hosted by ETSI.

Go to osm.etsi.org to learn more about OSM.


#Table of Contents#

  • Introduction
  • Configure openmano controller
  • Configure openstack
  • Getting started
  • Pending issues

#Introduction#

From version v0.4, openmano can use openstack as a VIM instead of openvim.

The python module that implements this functionality is vimconn_openstack.py. It uses openstack python clients.

Note for developers: other VIM connectors can be easily developed by inheriting from the vimconnector class in vimconn.py. The module must be named exactly vimconn_<datacenter type>.py

#Configure openmano controller#

Nothing is needed. From version v0.4 the automatic installer (./scripts/install-openmano.sh) installs the needed openstack python client packages: python-novaclient, python-keystoneclient, python-glanceclient and python-neutronclient.

To run the openmano component (neither openvim nor floodlight is needed), run:

    service-openmano openmano start 
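
You can optionally check that the openstack python clients are available (a minimal sanity check, assuming the packages above were installed by the installer):

    python -c "import novaclient, keystoneclient, glanceclient, neutronclient" && echo "openstack clients OK"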

#Configure openstack#

Tested on RH OSP 6 (Red Hat Openstack Platform 6, based on Juno release) and RH OSP 7 (Red Hat Openstack Platform 7, based on Kilo release).

  • Provide a mechanism to connect the SR-IOV interfaces. Since they are physical ports connected to an external switch, Neutron will not be able to connect them by itself. You can use a physical switch statically programmed to interconnect the VLAN tags among them (not recommended for security reasons). Another solution is to use an ML2 plugin that programs the external switch. For example, you can install the custom ML2 plugin detailed here (not ready, coming soon).

  • Configure the Neutron controller for using SR-IOV ports:

    Edit /etc/neutron/plugins/ml2/ml2_conf.ini with the tag and vlan ranges used by the dataplane network

      [ml2_type_vlan] 
      network_vlan_ranges = physnet_sriov:3000:3100
    
  • Configure the compute nodes for using SR-IOV

    On each compute node you have to associate the available VFs with each physical network. This is done by configuring pci_passthrough_whitelist in /etc/nova/nova.conf (see also the additional SR-IOV notes after this list). For example:

      pci_passthrough_whitelist = {"vendor_id":"8086", "product_id":"10ed","physical_network":"physnet_sriov"} 
    
  • Create tenant networks to be used as the control plane by the openmano scenarios (the example scenarios need a network called "default"). These networks must be "shared". If you need external connectivity, you have to connect this tenant network to a public network available in your openstack. For example, if your public network is called "public", you can create a "default" tenant network and a router to connect it with the "public" network:

      neutron net-create default --shared
      #personalize with your IP ranges
      #dns-nameserver is needed to populate resolv.conf
      neutron subnet-create default 192.168.10.0/24 --gateway=192.168.10.1 --enable-dhcp  --dns-nameserver=<your-DNS-server-IP> --name=default_subnet
      neutron router-create default2public
      #'public' is the name of your existent network, use the proper name 
      neutron router-gateway-set default2public public
      neutron router-interface-add default2public default_subnet
    
  • Create a valid tenant/user. You need a tenant/user with rights to create/delete images and flavors. One option is to use the admin tenant. The other is to change the flavor/image management policies for your tenant/user by editing /etc/nova/policy.json:

      "compute_extension:v3:flavor-manage": ""
      #To complete
    
  • Upload images (optional). Openmano will create the needed images in openstack at deployment time, but that process takes a long time. You can avoid the wait by manually loading the needed images into openstack. If you do so, it is very important to insert a "location" key in the image metadata, because openmano uses this metadata instead of the name to locate the image. The "location" value of the image metadata must be the same as the "VNFC":"VNFC image" field in the openmano VNF descriptor. If you do not load the images manually, openmano will use this location to load the image into openstack, so the field must contain a path or URL reachable by openmano. However, if you upload the image into openstack manually, it does not need to be a real path/URL.

      #If you upload from a local file:
      glance image-create --file=<file> --is-public=True --container-format=bare --name=<name> --disk-format=<qcow2, raw, ...> --min-disk=<e.g. 2>
      #insert the 'location' metadata:
      nova image-meta <name-or-uuid>  set location=/mnt/repository/linuximage.qcow2  #does not need to be a reachable path
    
      #If you download from the web, e.g. cirros:
      glance image-create --location="http://download.cirros-cloud.net/0.3.3/cirros-0.3.3-x86_64-disk.img" --is-public=True --container-format=bare --name=cirros-cloud --disk-format=qcow2 --min-disk=1
      #insert the 'location' metadata:
      nova image-meta <name-or-uuid>  set location=http://download.cirros-cloud.net/0.3.3/cirros-0.3.3-x86_64-disk.img
    
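Depending on your openstack release, some additional SR-IOV configuration is usually needed so that Neutron and Nova actually handle the SR-IOV ports (mechanism driver, scheduler filter). The following is only a hedged sketch with hypothetical example values; check the SR-IOV documentation of your openstack release for the exact options:

    #in /etc/neutron/plugins/ml2/ml2_conf.ini, enable the SR-IOV mechanism driver, e.g.:
    #   [ml2]
    #   mechanism_drivers = openvswitch,sriovnicswitch
    #in /etc/nova/nova.conf on the controller, add the PCI passthrough scheduler filter, e.g.:
    #   scheduler_default_filters = RetryFilter,AvailabilityZoneFilter,RamFilter,ComputeFilter,PciPassthroughFilter
    #then restart the affected services and check that the VFs are visible on the compute nodes:
    systemctl restart neutron-server openstack-nova-scheduler   #on the controller
    systemctl restart openstack-nova-compute                    #on each compute node
    lspci -nn | grep -i "Virtual Function"                      #the VFs (e.g. 8086:10ed) should be listed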

#Getting started#

See the script ./openmano/test/test_os.sh (section "create"). If you want to use it, you first need to set the environment variables with your local parameters, for example:

    export OS_USERNAME=admin   #we can use a user with rights to create/delete flavors/images
    export OS_PASSWORD=admin   
    export OS_AUTH_URL='http://<openstack ip>:35357/v2.0'
    export OS_TENANT_NAME=admin 
    #physical network, tag used for [ml2_type_vlan]:network_vlan_ranges at '/etc/neutron/plugins/ml2/ml2_conf.ini'
    export OS_CONFIG="dataplane_physical_net: physnet_sriov" 
    #images to be used. It will overwrite the "VNFC":"VNFC image" field of the VNF descriptors
    export OS_TEST_IMAGE_PATH_CIRROS="http://download.cirros-cloud.net/0.3.3/cirros-0.3.3-x86_64-disk.img" #same as image "location" metadata
    export OS_TEST_IMAGE_PATH_LINUX="http://download.cirros-cloud.net/0.3.3/cirros-0.3.3-x86_64-disk.img"
    export OS_TEST_IMAGE_PATH_LINUXDATA=/mnt/repository/linuximage.qcow2 #same as image "location" metadata
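
Before going on, you can check that these credentials work against your openstack (an optional sanity check using the standard openstack clients):

    nova flavor-list      #should list the existing flavors without errors
    neutron net-list      #should list the existing networks, including the "default" one created above
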
  • Create a tenant at openmano if it was not done before:

      ./scripts/service-openmano.sh openmano start
      ./openmano tenant-create mytenant-os --description=tenant-description
      #- Take the uuid and update the environment variable associated to the openmano tenant:
      export OPENMANO_TENANT=<obtained uuid> 
    
  • Add openstack as a VIM for openmano:

      ./openmano datacenter-create  myos "$OS_AUTH_URL" --type=openstack --config="$OS_CONFIG"
      #OS_AUTH_URL is the url of the openstack controller, e.g. http://<openstack-ip>:35357/v2.0
      #OS_CONFIG (optional): the name of the physical dataplane net, e.g. 'dataplane_physical_net: physnet_sriov'
      	
      #Attach openstack tenant to this datacenter
      ./openmano datacenter-attach myos --user="$OS_USERNAME" --password="$OS_PASSWORD" --vim-tenant-name="$OS_TENANT_NAME" #optionally --vim-tenant-id=<OS_TENANT_ID>
      
      #Take the uuid and update the environment variable, needed when several datacenters are added:
      export OPENMANO_DATACENTER=<myos uuid>
      ./openmano datacenter-list                                 #must show 'myos' datacenter
      #incorporate shared nets
      ./openmano datacenter-net-update myos
    
  • Create VNFs and scenarios and deploy them in the normal way, e.g. as in the test (a quick check on the openstack side is sketched after these commands):

      ./openmano vnf-create vnfs/examples/linux.yaml  --image-path="$OS_TEST_IMAGE_PATH_LINUX"
      #--image-path is optional to use THIS image path instead of the one in the yaml file
      ./openmano vnf-create vnfs/examples/dataplaneVNF_2VMs.yaml --image-path="$OS_TEST_IMAGE_PATH_LINUXDATA,$OS_TEST_IMAGE_PATH_LINUXDATA"
      ./openmano vnf-create vnfs/examples/dataplaneVNF3.yaml --image-path="$OS_TEST_IMAGE_PATH_LINUXDATA"
      	
      ./openmano scenario-create scenarios/examples/simple.yaml
      ./openmano scenario-create scenarios/examples/complex2.yaml
      
      ./openmano scenario-deploy simple simple-instance    #good luck :-)
      ./openmano scenario-deploy complex2 complex2-instance
    
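Once deployed, you can check the result on the openstack side with the standard clients (a simple sanity check; the exact names depend on your scenarios and instances):

    nova list             #the VMs of the deployed scenario instances should appear and reach ACTIVE
    neutron net-list      #the scenario networks should have been created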

#Pending issues#

  • Currently, openstack does not have any mechanism to connect PF (passthrough) ports; therefore, when using openstack the connector returns an error.
  • Currently there is no mechanism to set the virtual PCI address of a VM in openstack, so the connector ignores it.
  • Some of the openmano core/paired hyperthreading requirements are not yet available in openstack.
  • Openmano has not implemented a mechanism to push user metadata to virtual machines (and to VNFs), such as cloud-init data, configuration scripts, ssh keys, etc.