Ask the Expert: Server I/O: Unleashing the potential of the UCS Cisco Virtual Interface Card (VIC)

ciscomoderator
Community Manager

Server I/O: Unleashing the potential of the UCS Cisco Virtual Interface Card (VIC) with Robert Burns

Welcome to the Cisco Support Community Ask the Expert conversation. This is an opportunity to learn and ask questions about Cisco's line of VIC adapters for the Unified Computing System (UCS) B and C Series with Cisco expert Robert Burns. This includes installation, configuration, design, and troubleshooting of VICs such as the Cisco UCS 1200 Series, the B-Series blade server VICs (VIC 1240, 1280, M81KR), and the C-Series rack server VICs (1225 & P81E).

Robert Burns is a technical leader for advanced engineering in Cisco's Datacenter Technologies group, specializing in data center technologies and cloud computing. He facilitates beta hardware and software support with unified computing system (UCS) customers, has helped develop and deliver UCS bootcamp training, and designs Cisco Data Center certifications including UCS specialist exams and Data Center CCIE designations. He is currently leading various data center study groups for the CCNA and CCIE certifications. He has 15 years of industry experience and previously held roles as systems administrator for WIS International and IT manager for Dave & Buster's Canada. Burns holds a computer information systems degree in information technology from Humber College.

Remember to use the rating system to let Robert know if you have received an adequate response.

Robert might not be able to answer each question due to the volume expected during this event.

Remember that you can continue the conversation on the Data Center sub community shortly after the event. This event lasts through April 5, 2013. Visit this forum often to view responses to your questions and the questions of other Cisco Support Community members.

12 Replies

Hi Robert,

I'm having a hard time understanding all the types & models of the cards available and would appreciate it if you could clarify this a little bit. If I understand this correctly, there are three types (groups) of cards:

a) the simplest ones, called NIC cards, only have a 10Gbps Ethernet interface

b) more complex cards, called CNAs, which provide both HBA and Ethernet interfaces

c) the most complex cards (I'm not sure what the name of this type is), which contain multiple CNA interfaces

I'm not even sure if there is another type/group. Also, I know there are many models but I was unable to find which model belongs to which group and there are so many of them: VIC, 1240, 1280, M81KR, M71KR, 82598KR...

Additionally, what I'm calling a card here is the piece of hardware that must go into the blade so it can communicate with the rest of the world, but I'm not sure if the word "card" should be replaced with "adapter" or "mezzanine card". Is VIC (Virtual Interface Card) the name for my third card type, or is it used for all three types?

Thanks,

Tenaro

Great question to start the discussion, Tenaro!

When dealing with servers we'll normally classify I/O adapters into three categories (some of which you've correctly identified):

1. NIC (Network Interface Card). Ethernet only. With UCS racks & blades these come in 1G and 10G varieties. Examples include the Intel 82598KR (M61KR-I) and the Broadcom M51KR-B.

2. HBA (Host Bus Adapter). Fibre Channel only. UCS rack servers support HBAs from Qlogic and Emulex at 4Gb/s, 8Gb/s, and soon 16Gb/s. This category includes PCIe cards such as the Emulex LPe12002 and Qlogic QLE2562. Note that we do not offer any pure HBA mezz options for blades, only racks. The reason is that there really isn't much of a difference in price versus a CNA (below), which offers both Ethernet & FC functionality.

3. CNA (Converged Network Adapter). These contain both Ethernet & Fibre Channel elements and are the basis for unified fabric design. UCS servers support CNAs from Qlogic, Emulex, and Cisco. Examples of this group include the M71KR-E/Q, the M81KR, and the VIC line of Cisco adapters. Where the Cisco adapter differentiates itself is that it has virtualization and failover capabilities, whereas the Emulex & Qlogic cards do not.

The most popular adapters on B-Series by far are the VICs. Over the past 5 years, the prices for CNAs have dropped tremendously, which, combined with their features and capabilities, makes them an extremely effective choice. "PCIe card" versus "mezz card" normally refers to the form factor. All blades have "mezzanine" slots rather than the standard PCIe slots you might find on a rack server, so we'll commonly hear "mezz cards" when talking about adapters for blades, and "adapters" or "PCIe cards" when dealing with rack servers. Our product names started off with an "M" or "P" in the Product ID, which was to easily identify them as mezz or PCIe cards (blade vs. rack types). As marketing is constantly evolving, we've now moved to the "VIC" naming for our CNAs, with a numeric card series and "lane count" in the name. (More on this shortly.)

Cisco's first generation of CNAs (M81KR and P81E) were extremely successful, and we're currently offering our second-generation CNAs (known as the VIC 1240/1280 for blades and the VIC 1225 for racks). Focusing on the current blade VICs, there are two offerings. The VIC 1240 is offered as an mLOM (Modular LAN on Motherboard) on M3 blades, which leaves the blade's mezz slot available for another adapter. It's a full-blown VIC, so you can create multiple virtual NICs and HBAs just as you can with any Virtual Interface Card. Where the 1240 and 1280 differ is that the VIC 1280 is a mezzanine card and goes into the mezz slot; it also boasts 4 x 10G PCI lanes to each fabric. The VIC 1240 by itself supports 2 x 10G PCI lanes to each fabric, providing a total of 40G of bandwidth to the blade. This is still plenty of bandwidth and very well suited for bandwidth-intensive and bursty applications. The VIC 1240's performance can also be expanded with the addition of the VIC Port Expander. This adds another two 10G lanes per fabric, making it the equivalent of the 1280 and providing 80G of total bandwidth to the blade. This makes it a great option to start with the VIC 1240 and then add the Expander (which is cheaper than a VIC 1280) to "pay as you grow" when you need the extra bandwidth.
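To put the lane math in one place, here is a minimal sketch in Python (illustrative only; the lane counts are simply the figures quoted above, at 10G per lane across two fabrics) that totals the per-blade bandwidth for each option:

# Illustrative only: per-blade bandwidth math for the current blade VICs,
# using the lane counts described above (10G per lane, two fabrics).
LANE_GBPS = 10
FABRICS = 2

configs = {
    "VIC 1240": 2,                  # 2 x 10G lanes to each fabric
    "VIC 1240 + Port Expander": 4,  # the expander adds two more lanes per fabric
    "VIC 1280": 4,                  # 4 x 10G lanes to each fabric
}

for name, lanes_per_fabric in configs.items():
    total = lanes_per_fabric * LANE_GBPS * FABRICS
    print(f"{name}: {lanes_per_fabric} x {LANE_GBPS}G per fabric = {total}G total to the blade")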

I'll pause there and see if this answers your original question.  From here I'm happy to take any follow up questions.

Regards,

Robert

Great reply, thanks!

Just to confirm: while M1 and M2 versions require mezzanine cards in the blades, the M3 series comes with the 1240 built into the motherboard so that the mezzanine slot can be used for expansion, correct?

As you are so helpful, I'd like to share with you another issue that bothers me: VN-Link and CNA virtualization. I'm having a problem seeing the real benefits of VIC cards. As UCS uses ESXi to separate VMs from the hardware itself, we can assign every virtual machine one or more virtual network cards, and thanks to the virtual switch inside ESXi, all that traffic will use the same physical network interface. In other words, what is the benefit of having an M81KR with 128 virtual network interfaces? I don't think I can assign each of the 128 virtual interfaces to a separate VM and claim that every VM has a dedicated 10Gbps just for itself when communicating with the rest of the world?

Just to confirm: while M1 and M2 versions require mezzanine cards in the blades, the M3 series comes with the 1240 built into the motherboard so that the mezzanine slot can be used for expansion, correct?

Correct. Keeping the mezzanine slot available for another card is a great advantage. In addition to network adapters, we can likely expect GPU and flash storage options to become available in the near future to fill that vacant mezz slot.

Great follow-up question again. VN-Link is a collection of hardware & software features that includes things like VN-Tag, which allows UCS to distinguish between the various virtual interfaces that all share the same adapter & uplinks. Now to cover the benefits. Since you mention ESX, I'll use that as an example. A standard ESX (bare metal) host will normally require access to various resources, which could include:

Management Network

FC Storage

iSCSI/NFS/IP Storage Network

VMotion Network

Fault Tolerance (FT) Network

Backup Network

Various VM Data Networks

The three major benefits can be summed up as follows:

Bandwidth Management / QoS / Network Policies

Leveraging the VN-Link capabilities of the VIC allows you to carve up separate virtual NICs/HBAs for each function and present each one to the host as a unique PCI device. Additionally, I can apply specific network policies to each one. A very common example would be assigning different QoS settings to each virtual interface. For example, an iSCSI interface benefits greatly from jumbo frames, and Management access may be lower priority than IP Storage or VM Data. Being able to fine-tune your QoS based on network or function is essential when you realize that faster CPUs and increasing memory density are going to allow more VMs to run on each hypervisor. Could we just cram another 10G adapter into the host? Sure, if you have an available mezz slot. However, it's far more efficient to leverage QoS and manage oversubscription to ensure proper bandwidth & service levels for your guest VMs. This is one area where we do things far better (in my opinion) than some of our competitors, such as HP. HP's FLEX-10 adapters allow a logical segmentation of their 10G adapters into up to four sub-devices. The only problem is that the bandwidth for each sub-adapter is set as a hard limit. The VIC leverages a "minimum guarantee" - so any bandwidth not being used on one queue can be used anywhere else it's needed. This results in far greater bandwidth efficiency. Additional policies you can set per vNIC include flow control, CDP, allowed VLANs, Tx/Rx queue size, and uplink pinning.
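To illustrate the difference, here's a minimal sketch in Python (purely illustrative - the vNIC names, shares, and demand figures are made up, and the redistribution is a simplified single pass rather than the exact scheduler behavior) contrasting a hard split with a minimum guarantee on one congested 10G uplink:

# Illustrative only: contrast a hard bandwidth limit with a minimum guarantee
# on a single 10G uplink.  The vNIC names, shares, and demands are hypothetical,
# and the redistribution below is a simplified single pass.
LINK_GBPS = 10.0

vnics = {                # (guaranteed share of the link, current demand in Gbps)
    "mgmt":    (0.10, 0.2),
    "vmotion": (0.20, 0.5),
    "storage": (0.30, 6.0),
    "vm-data": (0.40, 6.0),
}

# Hard limit: each vNIC is capped at its share even when the link has headroom.
hard = {n: min(d, s * LINK_GBPS) for n, (s, d) in vnics.items()}

# Minimum guarantee: every vNIC gets at least its share under congestion, and
# bandwidth left unused by the quiet vNICs is handed to the busy ones.
granted = {n: min(d, s * LINK_GBPS) for n, (s, d) in vnics.items()}
spare = LINK_GBPS - sum(granted.values())
busy = [n for n, (s, d) in vnics.items() if d > granted[n]]
busy_weight = sum(vnics[n][0] for n in busy) or 1.0
guaranteed = {n: granted[n] + (spare * vnics[n][0] / busy_weight if n in busy else 0.0)
              for n in vnics}

print(f"{'vNIC':10}{'hard limit':>12}{'min guarantee':>15}")
for n in vnics:
    print(f"{n:10}{hard[n]:12.1f}{guaranteed[n]:15.1f}")

Running this, the storage and vm-data vNICs pick up the bandwidth the quiet mgmt and vMotion vNICs aren't using under the minimum-guarantee model, whereas the hard-limit model leaves that headroom stranded.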

High Performance with Hypervisor Bypass/VM DirectPath

Hypervisor bypass is the ability for a VM to access PCIe adapter hardware directly in order to reduce the overhead on the physical VMware host's CPU. Using the VIC's ability to create a very high number of virtual interfaces allows us to present these NICs directly to the VM. The advantage here is less host CPU/memory overhead for I/O virtualization. Removing the hypervisor's network virtualization layer can provide considerable performance gains.

VM-FEX

Virtual Machine Fabric Extender (VM-FEX) is the term we give to the VIC's ability to create a distributed virtual switch that is managed from UCSM. Very similar in behavior to the Nexus 1000v, each VIC will provide dynamic vNICs to VMs, all managed by UCSM, which again bypass the hypervisor to achieve performance gains.

You are correct that even though we can present one of these virtual NICs to the VMs directly, they will still be governed by the underlying VIC's capability, whether it's 10G, 20G, or 40G per fabric. The goal is to provide a method to finely tune your I/O requirements, all from a centralized source - UCSM.

Regards,

Robert

craiford
Level 1

How can I monitor performance of the FC ports on a 6120 (from the 6120 to the SAN infrastructure), the IO modules in the chassis, and the VIC in a B-Series blade? We are oversubscribed on bandwidth and I need to track utilization.

Thanks,

Charles

@craiford

Charles,

Monitoring performance for interfaces within the system can be done from a few places, depending on what you're looking for. You can easily monitor FC ports for CRC errors, signal errors, drops, etc. from the Statistics page when selecting a particular interface from the Equipment tab. If you're looking to monitor more for performance, you can leverage a Threshold Policy, which will generate alerts when an interface reaches or exceeds a certain Tx/Rx value. You'll find Threshold Policies available under the LAN and SAN tabs for their respective interface types. If you want to view live performance data, you can drop into the NX-OS CLI context and look at the input/output rate of any interface - virtual or physical.

Example (keep in mind this is a lab system with no traffic flowing currently):

cae-dev-A(nxos)# show int fc1/33 counters brief

-------------------------------------------------------------------------------

Interface            Input (rate is 1 min avg)    Output (rate is 1 min avg)

                     ---------------------------  -----------------------------

                     Rate     Total               Rate     Total

                     MB/s     Frames              MB/s     Frames

-------------------------------------------------------------------------------

fc1/33              1499     1970665                2     5117759

cae-dev-A(nxos)# show int fc1/33 counters

fc1/33

    1 minute input rate 256 bits/sec, 32 bytes/sec, 0 frames/sec

    1 minute output rate 6936 bits/sec, 867 bytes/sec, 0 frames/sec

    1970665 frames input, 2088440596 bytes

      0 discards, 0 errors, 0 CRC

      0 unknown class, 0 too long, 0 too short

    5117759 frames output, 9468205792 bytes

      0 discards, 0 errors

    0 input OLS, 1 LRR, 0 NOS, 0 loop inits

    2 output OLS, 2 LRR, 0 NOS, 0 loop inits

    0 link failures, 0 sync losses, 0 signal losses

     0 BB credit transitions from zero

      16 receive B2B credit remaining

      250 transmit B2B credit remaining

      0 low priority transmit B2B credit remaining

cae-dev-A(nxos)#

The same could be done for Ethernet and port channel interfaces. The important aspect is to set yourself a baseline: take some readings from your system under different circumstances - no load, regular load, and heavy load. This way you can compare any reading you take against your baselines and get an idea of how the system is functioning and whether or not you are saturating links.
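If you take these readings regularly, a tiny script can do the comparison for you. Here is a minimal sketch in Python (illustrative only; the interface names, rates, link capacity, and 80% threshold are assumptions, not values from this system) that compares the MB/s column from "show interface counters brief" against a saved baseline:

# Illustrative only: flag interfaces running well above their baseline.  The
# interface names, rates, link capacity, and 80% threshold are all hypothetical.
baseline_mbps = {"fc1/33": 150, "fc1/34": 200}   # readings taken under regular load
current_mbps  = {"fc1/33": 720, "fc1/34": 210}   # latest readings from "counters brief"

LINK_CAPACITY_MBPS = 800   # assume an 8Gb/s FC link, roughly 800 MB/s usable

for intf, rate in sorted(current_mbps.items()):
    base = baseline_mbps.get(intf, 0)
    util = rate / LINK_CAPACITY_MBPS
    flag = "  <-- investigate" if util > 0.8 else ""
    print(f"{intf}: {rate} MB/s now vs {base} MB/s baseline ({util:.0%} of link){flag}")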

When you want to look at links within the system other than your vNIC & vHBA virtual interfaces or external interfaces, we need to dive a little deeper into the system. Let's say we want to monitor the rate of traffic between an adapter's DCE interface and the IOM's backplane. Depending on which IOM you have (2100 vs. 2200 series) there are different commands for each. Why? The IOM rate commands reference the ASIC name on the IOM: "Redwood" is the ASIC name for the 2100 Series IOM, and "Woodside" is for the 2200 Series.

To view these stats, we'll need to connect to the IOM directly. For the output below, I'm connecting from Fabric-A to the IOM in Chassis 1. The IOM is a 2204XP, so for my system I'll be using the "woodside" command, but you can easily substitute "redwood" if you're using a 2104 IOM.

cae-dev-A# connect iom 1

Attaching to FEX 1 ...

To exit type 'exit', to abort type '$.'

fex-1# show platform software woodside rate

+--------++------------+-----------+------------++------------+-----------+------------+-------+-------+---+
| Port   || Tx Packets |  Tx Rate  |   Tx Bit   || Rx Packets |  Rx Rate  |   Rx Bit   |Avg Pkt|Avg Pkt|   |
|        ||            |  (pkts/s) |    Rate    ||            |  (pkts/s) |    Rate    | (Tx)  | (Rx)  |Err|
+--------++------------+-----------+------------++------------+-----------+------------+-------+-------+---+
| 0-BI   ||         22 |         4 |   4.00Kbps ||         13 |         2 |   5.41Kbps |    94 |   240 |   |
| 0-CI   ||         36 |         7 |  12.97Kbps ||         52 |        10 |  62.63Kbps |   205 |   732 |   |
| 0-NI3  ||      73526 |     14705 |  11.76Mbps ||    3796818 |    759363 |   9.36Gbps |    80 |  1521 |   |
| 0-NI2  ||          2 |         0 |   3.48Kbps ||          5 |         1 |   2.60Kbps |  1072 |   305 |   |
| 0-NI1  ||          2 |         0 |   3.48Kbps ||         39 |         7 |   8.96Kbps |  1072 |   123 |   |
| 0-NI0  ||         58 |        11 |  57.20Kbps ||          9 |         1 |   3.29Kbps |   596 |   209 |   |
| 0-HI3  ||    3795538 |    759107 |   9.36Gbps ||          0 |         0 |   0.00 bps |  1521 |     0 |   |
| 0-HI1  ||          1 |         0 | 480.00 bps ||      73501 |     14700 |  11.76Mbps |   282 |    80 |   |
+--------++------------+-----------+------------++------------+-----------+------------+-------+-------+---+

The main interfaces we're concerned with are the HI (Host Interfaces) and NI (Network Interfaces). The Host Interfaces are the DCE interfaces connecting to the server's adapter. The Network Interfaces are the external interfaces on the back of the IOM, which connect up to the Fabric Interconnect. The two 9.36Gbps values above show a heavy traffic flow coming into the IOM on Network Interface 3 (0-NI3) and exiting the IOM towards a server on Host Interface 3 (0-HI3). (I was running iperf to generate some load.)

As you can see, the NX-OS context is very good for looking at live line rates of interfaces. You can see much of the interface rate information in the GUI using the stats views, but I personally prefer dropping into the CLI and capturing outputs for the purpose of watching link performance. (It's also much easier to copy and paste into a spreadsheet for tracking.)
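If you do end up tracking these over time, a short script can save the copy-and-paste step. Here is a minimal sketch in Python (illustrative only; it assumes you have saved the "show platform software woodside rate" output to a text file, and the file names shown are hypothetical) that pulls the port, Tx bit rate, and Rx bit rate columns out of a capture like the one above and appends them to a CSV:

# Illustrative only: extract the port name, Tx bit rate, and Rx bit rate from a
# saved "show platform software woodside rate" capture and append them to a
# CSV for tracking over time.  The file names used here are hypothetical.
import csv
import sys
from datetime import datetime

def parse_rates(path):
    rows = []
    with open(path) as f:
        for line in f:
            if not line.startswith("|"):
                continue                         # skip the command echo and "+---" rulers
            cells = [c.strip() for c in line.strip().strip("|").split("|")]
            cells = [c for c in cells if c]      # drop the empties left by "||" separators
            if not cells or not cells[0][0].isdigit():
                continue                         # skip both header rows ("Port", "(pkts/s)", ...)
            port, tx_bit_rate, rx_bit_rate = cells[0], cells[3], cells[6]
            rows.append((port, tx_bit_rate, rx_bit_rate))
    return rows

def append_csv(rows, csv_path):
    stamp = datetime.now().isoformat(timespec="seconds")
    with open(csv_path, "a", newline="") as f:
        writer = csv.writer(f)
        for port, tx, rx in rows:
            writer.writerow([stamp, port, tx, rx])

if __name__ == "__main__":
    # Usage (hypothetical file names): python woodside_rates.py capture.txt rates.csv
    append_csv(parse_rates(sys.argv[1]), sys.argv[2])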

Hope this helps!

Robert

Nigel Pyne
Level 1

Hi Robert

I have a question around the use of the VIC card on a blade where the Nexus 1000v is also installed. Is the 1000v able to leverage any of the abilities of the VIC? I.e., if I have segmented the VIC into several NICs with specific QoS settings on each, can this functionality be used by the 1000v?

I know that with the 1000v installed we couldn't utilise VM DirectPath, but can we still gain benefit from the VIC and its QoS design?

Or will the benefits of the VIC card be lost when the 1000v is implemented?

Also, if FCoE is running, how would you manage QoS with the VIC and 1000v? Can the FCoE link be bound to a VIC virtual HBA that also appears in the 1000v so that QoS can be taken into account? Or would the FCoE link not appear in the 1000v uplinks and so complicate QoS calculations on the uplinks?

Regards

Nigel

@Nigel

I'll take your questions one at a time.

I have a question around the use of the VIC card on a blade where the Nexus 1000v is also installed. Is the 1000v able to leverage any of the abilities of the VIC? I.e., if I have segmented the VIC into several NICs with specific QoS settings on each, can this functionality be used by the 1000v?

The VIC and the 1000v combined provide the best solution for both flexibility and performance, in my opinion. A perfect example of this is doing what you have done and dividing the VIC into multiple virtual NICs so that you can assign unique QoS levels to each one. In this design you would likely have a different set of uplinks from your 1000v for each QoS level. Then, depending on the tier of the VM, you would assign that VLAN to a particular uplink. Grouping your VMs into various tiers of service level is very common. We see customers doing this for internal departments, ranking each one by different tiers, but another major implementation of this involves multitenancy, where a cloud provider has customers of varying service levels. The more you pay, the higher the service level (QoS) you'll be assigned. This helps keep critical VMs on top, while less critical VMs are serviced at "best-effort" levels.

An alternate design would be to set all QoS values on the 1000v. This requires setting the QoS policy of the vNIC to "Host Control". By doing this, the QoS/CoS values assigned by the 1000v will be maintained and honored by UCS. This is a little trickier in that you have to match your 1000v QoS levels with UCS. It's a good fit for non-virtualized adapters, where you don't have the flexibility the VIC provides in being able to create multiple vNICs and assign unique QoS values to each one.

I know that with the 1000v installed we couldn't utilise VM DirectPath, but can we still gain benefit from the VIC and its QoS design? Or will the benefits of the VIC card be lost when the 1000v is implemented?

Yes, you can still leverage all the QoS modularity the VIC provides. There is a slight performance advantage to the VM-FEX offering with VMDirectPath, but it's not nearly as featured as the full-blown 1000v. The 1000v and VIC are the best combination for flexibility and work across any host, not just UCS blades. There are also some limitations when using VMDirectPath (vMotion, etc.) that require minimum versions of ESX to work around. The 1000v and VIC, on the other hand, are not nearly as limited.

Also, if FCoE is running, how would you manage QoS with the VIC and 1000v? Can the FCoE link be bound to a VIC virtual HBA that also appears in the 1000v so that QoS can be taken into account? Or would the FCoE link not appear in the 1000v uplinks and so complicate QoS calculations on the uplinks?

FCoE is just an Ethernet transport for FC. When applying QoS in UCS, you're assigning it at the vNIC layer. FC traffic will always be assigned to the no-drop class, which by default gets a 50/50 weighting against other traffic. As you enable additional CoS queues, this weighting will be automatically adjusted, but it will always preserve the no-drop behavior for FC traffic. FCoE links are really transparent to the 1000v; all the 1000v has visibility to are the 10G Ethernet adapters it can use as uplinks. That's the beauty of an FCoE CNA (Converged Network Adapter): the host views the CNA as a set of Ethernet and HBA adapters, so the underlying OS can leverage existing drivers and protocol stacks to communicate with them. This keeps your QoS simple and unchanged. Just as you were applying QoS to NICs before CNAs or the VIC, you're doing the same thing now with them. The only difference is that with the Cisco VIC the NICs are now multiple and "virtual".
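To make the weighting behavior concrete, here is a minimal sketch in Python (illustrative only; the class names and weight values are hypothetical and are not the actual UCS system-class defaults) showing how guaranteed shares re-normalize as classes are enabled:

# Illustrative only: how per-class weights normalize into guaranteed bandwidth
# percentages as more CoS classes are enabled.  The class names and weights
# below are hypothetical, not the exact UCS system-class defaults.
def guarantees(enabled_classes):
    total = sum(enabled_classes.values())
    return {cls: round(100.0 * w / total, 1) for cls, w in enabled_classes.items()}

# With only the FC (no-drop) and best-effort classes enabled, equal weights
# work out to the 50/50 split mentioned above.
print(guarantees({"fc": 5, "best-effort": 5}))

# Enable two more classes and the shares re-normalize automatically; FC keeps
# its no-drop behavior, but its guaranteed share shrinks proportionally.
print(guarantees({"fc": 5, "best-effort": 5, "gold": 9, "bronze": 7}))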

Regards,

Robert

Thanks for the comprehensive answer, Robert.

If I may, I would like to run a scenario by you.

I need the 1000v to apply QoS marking to traffic from VMs - one marking for the voice RTP stream and one for voice signalling from the same VM. Would I then be able to utilise the vNICs from the VIC, i.e. have the RTP traffic use one vNIC uplink based on the QoS value applied by the 1000v, and the voice signalling traffic use another vNIC uplink based on its QoS value as assigned by the 1000v?

The problem to resolve is the fact that voice signalling natively uses the same QoS marking - 3 - as FCoE. So one solution is to remark the voice signalling traffic; the other solution is to assign another QoS marking for FCoE. I'm uncertain which is the best solution and realise this is a bit off topic for you, but it helps explain the scenario above.

And just to confirm my understanding: when we carve up the VIC into multiple vNICs and use the native UCS B-Series QoS policies, the Nexus 1000v's uplinks will be bandwidth 'monitored' by the VIC? Would an egress QoS policy need to be configured on the Nexus 1000v's uplinks to manage the bandwidth?

Thanks again

Nigel

There's a good design doc one of my colleagues, Louis Wattta, recommended: http://www.cisco.com/en/US/docs/voice_ip_comm/cucm/srnd/8x/netstruc.html  It covers voice application CoS/QoS considerations with UCS.

This question has come up before regarding voice and FCoE sharing the same CoS 3 queue, and it's been regarded as presenting no issue so far.

And just to confirm my understanding: when we carve up the VIC into multiple vNICs and use the native UCS B-Series QoS policies, the Nexus 1000v's uplinks will be bandwidth 'monitored' by the VIC? Would an egress QoS policy need to be configured on the Nexus 1000v's uplinks to manage the bandwidth?

Correct. By applying QoS to the VIC vNIC, it will supersede the 1000v marking unless you enable "Host Control". So if you apply QoS at the 1000v level, it will be honored up to the egress of the VIC vNIC, where the new CoS values will then be applied. I don't have a great deal of expertise in the voice arena, so I would have to defer to the design doc linked above to properly advise on whether it's best to apply QoS at the 1000v or the VIC layer. From initial analysis it looks like you can go with either design. QoS is a congestion control mechanism: if you're under light usage, you can probably get away with applying QoS just at the UCS vNIC layer. If you find you're getting contention at the hypervisor layer (1000v DVS level), then you might find it worthwhile to also apply QoS at the virtual layer to ensure prioritized egress of voice traffic from each 1000v VEM.

I'll do some more research tomorrow and come back with some additional recommendations.  In the meantime have a read through the QoS sections of the Design doc and we'll pick up on this conversation shortly.

Regards,

Robert

richbarb
Cisco Employee

Hello Robert,

I have two questions which, if answered, will definitely help me a lot.

1) I am trying to move service profiles in Cisco UCS between B200 M2/M81KR and B200 M3/1240&1280 servers; this is a Windows Server 2008 R2 environment. The migration succeeds in UCS, but the problem is that the network configuration in Windows Server 2008 becomes a complete mess, and we have to redo all of the network configuration.

Well, I already tried fixing the placement order in the service profile, but it doesn't have any effect.

What should I do to trick Windows in this case?

2) Does Windows use the same driver for all kinds of Cisco mezzanine cards?

Thank you.

Richard

Hey Richard. Happy to answer any/all of your questions!

For the first problem, unfortunately you're at the mercy of how Windows detects hardware. We can implement a vCon placement policy to ensure that vNICs/vHBAs are detected in the same order on the PCI bus, but even beyond that there are some intricacies in how Windows detects new/different hardware. Even though we can assign the same MAC addresses from the old VIC to the new-generation VIC, Windows can still have issues with a seamless transition.

Rather than re-invent the wheel, I'll point you towards a great post by one of our System Engineers - Jeff Allen. 

http://jeffsaidso.com/2012/07/painless-hardware-upgrades-with-cisco-ucs/   He provides a great explanation of the issue, as well as a workaround - keep in mind the workaround is not officially supported by TAC. If you're going to try it, I'd suggest doing it in a controlled lab first before applying it to a production system. As always, back up your data before doing anything.

For your second question, there are different chipsets/ASICs for each generation of VIC. The first generation (M81KR), known as "Palo", uses a different chipset than the current VIC 1240/1280, which uses the "Sereno" chipset. Therefore you will have different drivers for each chipset.

Regards,

Robert
