In an OpenStack deployment, when packets from a nova instance need to reach the external world, they first go through the virtual switch (Open vSwitch or Linux Bridge) for layer-2 switching, then through the physical NIC of the compute host, and then leave the compute host. Hence, the path of the packets is:
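That path can be sketched as:

```
nova instance → virtual switch (Open vSwitch or Linux Bridge) → physical NIC → external network
```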
When Single Root I/O Virtualisation (SR-IOV) and PCI pass-through are deployed in OpenStack, packets from the nova instance do not go through the virtual switch (Open vSwitch or Linux Bridge); they go directly through the physical NIC and then leave the compute host, bypassing the virtual switch. In this case, the physical NIC does the layer-2 switching of the packets. Hence, the path of the packets is:
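With SR-IOV, the sketch becomes:

```
nova instance → physical NIC (layer-2 switching done in hardware) → external network
```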
A virtual switch (Open vSwitch or Linux Bridge) may still be used alongside SR-IOV if an instance needs a virtual switch for layer-2 switching. However, a virtual switch is software and does not scale as well as a physical NIC when the packet rate is high.
In order to deploy SR-IOV on Cisco's UCS servers, we need the UCSM ml2 neutron plugin developed by Cisco. Here are a few links about the UCSM ml2 neutron plugin:
Create an SR-IOV neutron port using the argument "--binding:vnic_type direct". Note the port ID in the output below; it will be needed to boot the nova instance and attach it to the SR-IOV port.
# neutron port-create provider_net --binding:vnic_type direct --name sr-iov-port
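Before booting the instance, you can optionally confirm the binding by inspecting the port; the binding:vnic_type field should read direct. A sketch, assuming the port name used above (this requires a live deployment, so no output is shown):

```
# neutron port-show sr-iov-port
```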
Boot a nova VM and attach the SR-IOV port to it.
# nova boot --flavor m1.large --image RHEL-guest-image --nic port-id=<port ID of SR-IOV port> --key-name mc2-installer-key sr-iov-vm
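It is worth checking that the instance reaches the ACTIVE state before proceeding; a sketch, assuming the instance name used above:

```
# nova show sr-iov-vm | grep status
```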
Once the instance is up, ping it and SSH into it (security groups do not work for SR-IOV ports; please refer to the "Known limitations of SR-IOV ports" section at the end of this blog). Make sure that you can ping the gateway of the instance. The packets from the instance will not go through the virtual switch (Open vSwitch or Linux Bridge); they go directly through the physical NIC of the compute host and then leave the compute host, bypassing the virtual switch.
Below are the steps to verify SR-IOV (VF) in the UCSM GUI.
Below are the steps to verify that the SR-IOV port does not use Open vSwitch and bypasses it when sending/receiving packets:
1. Find the compute node on which sr-iov-vm is running.
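One way to find the compute node is via the hypervisor hostname shown by nova show; a sketch, assuming admin credentials (the extended attribute is visible only to admins):

```
# nova show sr-iov-vm | grep OS-EXT-SRV-ATTR:hypervisor_hostname
```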
2. Find the port ID of the SR-IOV neutron port. d8379d3e-8877-4fec-8b50-ab939dbca797 is the port ID of the SR-IOV port in this case.
# neutron port-list | grep sr-iov-port
3. SSH into the compute host (mc2-compute-12 in this case) and make sure that:
1. There is no interface with d8379d3e in its name on the compute host, and,
2. There is no interface with d8379d3e in the output of "ovs-vsctl show".
# ssh root@mc2-compute-12
[root@mc2-compute-12 ~]# ip a | grep d8379d3e
[root@mc2-compute-12 ~]# ovs-vsctl show | grep d8379d3e
The above outputs must be empty. This shows that the SR-IOV port does not use Open vSwitch for layer-2 switching; the physical NIC behind the SR-IOV port does the layer-2 switching for packets from the instance sr-iov-vm.
On the other hand, in the case of an instance booted with a regular OVS (non-SR-IOV, virtio) port, the above outputs will not be empty. Below is an example of a regular OVS (non-SR-IOV, virtio) port (ID f9936cb7-885d-451d-9fc2-a86a670be732) attached to an instance. This port appears as part of br-int (the OVS integration bridge) in the output of "ovs-vsctl show".
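The reason grepping for the port ID prefix works is that Linux limits interface names to 15 characters, so nova names on-host devices with a short prefix (tap, qbr, qvo, qvb) followed by the first 11 characters of the Neutron port UUID. A minimal sketch of that derivation (the helper name is ours, not an OpenStack API):

```python
def host_ifname(port_id: str, prefix: str = "qvo") -> str:
    """Derive the conventional on-host interface name for a Neutron port.

    Linux interface names are limited to 15 characters, so nova uses a
    3-character prefix plus the first 11 characters of the port UUID.
    """
    return prefix + port_id[:11]

# The qvo (OVS-side veth) name for the example port above:
print(host_ifname("f9936cb7-885d-451d-9fc2-a86a670be732"))  # qvof9936cb7-88
```

This is why `ip a | grep f9936cb7` and `ovs-vsctl show | grep f9936cb7` match for a regular OVS port, while the equivalent greps for the SR-IOV port ID return nothing.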