A virtual network interface card (vNIC) or virtual host bus adapter (vHBA) logically connects a virtual machine to a virtual interface on the UCS 6100/6200 series fabric interconnect and allows the virtual machine to send and receive traffic through that interface. Each vNIC in a virtual machine corresponds to a virtual interface in the fabric interconnect. This is accomplished by using both Ethernet and Fibre Channel end-host mode and by pinning the MAC addresses and World Wide Names of both physical and virtual servers to the interconnect uplink interfaces.

To implement this, Cisco developed a hardware approach based on the concept of network interface virtualization (NIV). In NIV, all switching is performed by the hardware network switch, while the host adapters on the servers run an interface virtualizer. The interface virtualizer tags each outgoing packet with a unique tag, known as a virtual network tag (VNTag); for incoming traffic, it removes the VNTag and directs the packet to the specified vNIC.
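Conceptually, the interface virtualizer's job can be sketched as a tag-on-egress, strip-on-ingress pair of operations. The sketch below is a simplified illustration only: the field layout is hypothetical (the real VNTag is a 6-byte tag using EtherType 0x8926 with a packed bit structure carrying source and destination virtual-interface IDs, not the plain integer used here).

```python
# Conceptual sketch of an NIV interface virtualizer.
# NOTE: the tag layout below is simplified for illustration and is NOT
# the real VNTag bit format; only the EtherType value is the real one.
ETHERTYPE_VNTAG = 0x8926

def add_vntag(frame: bytes, src_vif: int) -> bytes:
    """Egress: insert a tag after the 12-byte MAC header so the upstream
    switch knows which virtual interface (vNIC) sourced the frame."""
    tag = ETHERTYPE_VNTAG.to_bytes(2, "big") + src_vif.to_bytes(4, "big")
    return frame[:12] + tag + frame[12:]

def strip_vntag(frame: bytes) -> tuple[int, bytes]:
    """Ingress: remove the tag and return (vif_id, original frame) so the
    adapter can steer the frame to the corresponding vNIC."""
    assert int.from_bytes(frame[12:14], "big") == ETHERTYPE_VNTAG
    vif = int.from_bytes(frame[14:18], "big")
    return vif, frame[:12] + frame[18:]
```

The key point the sketch captures is that the adapter itself never switches between vNICs; it only tags and untags, leaving all forwarding decisions to the upstream hardware switch.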
Creating vNICs and vHBAs
The number of vHBAs and vNICs that can be created depends on the type of mezzanine card used. For example, the Cisco UCS CNA M71KR supports a maximum of two vNICs and two vHBAs, while the Cisco UCS 82598KR supports a maximum of one vNIC. This is done when service profiles are created for the server: vNICs and vHBAs are created and assigned to the server blades via service profiles, and they are presented as physical NICs to the OS running on the blade. Remember that while a vNIC can support multiple VLANs, a vHBA can support only one VSAN.
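A tool that validates a service profile before pushing it might encode the per-adapter limits above as a simple lookup. This is a hypothetical helper (the table names and function are illustrative, not a UCS Manager API), using only the two adapters and limits stated in the text:

```python
# Hypothetical per-mezzanine-card limits, taken from the two examples
# in the text above (not an exhaustive or official table).
ADAPTER_LIMITS = {
    "UCS CNA M71KR": {"vnic": 2, "vhba": 2},
    "UCS 82598KR":   {"vnic": 1, "vhba": 0},
}

def can_add(adapter: str, kind: str, current_count: int) -> bool:
    """Return True if one more vNIC/vHBA fits within the adapter's limit."""
    return current_count < ADAPTER_LIMITS[adapter][kind]
```

A service-profile builder would call `can_add` before appending each interface, rejecting profiles that exceed what the blade's mezzanine card can present to the OS.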
If a vNIC carries VLANs other than the one designated as the "Default Network" for that particular vNIC, traffic for those VLANs reaches the NIC with its VLAN tagging intact. The NIC must therefore be configured as VLAN-aware in the operating system.
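On a Linux guest, for example, that OS-side VLAN awareness is typically provided with kernel 802.1Q subinterfaces. The interface name and addresses below are illustrative assumptions:

```shell
# Load the 802.1Q VLAN module and create a tagged subinterface for
# VLAN 100 on the NIC that the vNIC presents to the OS
# (eth0 and 192.0.2.10/24 are assumed example values).
modprobe 8021q
ip link add link eth0 name eth0.100 type vlan id 100
ip addr add 192.0.2.10/24 dev eth0.100
ip link set dev eth0.100 up
```

Untagged traffic on the vNIC's Default Network continues to arrive on `eth0` itself, while each additional VLAN gets its own `eth0.<id>` subinterface.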
When the UCS 6100 series fabric interconnect runs in end-host mode, it acts as an end host to the upstream network, representing all servers (hosts) connected to it through their vNICs. This is achieved by pinning vNICs (either dynamically or hard pinned) to uplink ports, which provides redundancy toward the network and makes the uplink ports appear as host ports to the rest of the network. Remember that whether the UCS runs in end-host mode or switching mode, and even when vNICs are hard pinned to uplink ports, all server-to-server unicast traffic within the server array is switched locally by the fabric interconnect and is never sent through the uplink ports. Server-to-server multicast and broadcast traffic is sent through all uplink ports in the same VLAN.
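The forwarding behavior described above can be summarized in a short sketch: pinning deterministically maps a vNIC to one uplink, and unicast destined for another local server never leaves the interconnect. The function names and MAC/port values are illustrative, not UCS internals:

```python
import zlib

def pin_vnic(vnic_mac: str, uplinks: list[str]) -> str:
    """Dynamic pinning sketch: deterministically map a vNIC to one
    uplink port (a stable hash stands in for the real pinning logic)."""
    return uplinks[zlib.crc32(vnic_mac.encode()) % len(uplinks)]

def forward_unicast(src_mac: str, dst_mac: str,
                    local_macs: set[str], uplinks: list[str]) -> str:
    """End-host-mode unicast decision: server-to-server traffic is
    switched locally; northbound traffic uses the source vNIC's
    pinned uplink."""
    if dst_mac in local_macs:
        return "local"          # stays inside the fabric interconnect
    return pin_vnic(src_mac, uplinks)
```

Because the pinning is a pure function of the vNIC, a given server's northbound traffic always uses the same uplink until a repin (e.g., on uplink failure), which is what lets the upstream switches see stable, host-like MAC behavior on each uplink.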