10-21-2011 01:27 PM - edited 03-01-2019 07:01 AM
With Mike Frase
Welcome to the Cisco Support Community Ask the Expert conversation. This is an opportunity to learn about Fibre Channel over Ethernet (FCoE) and High Availability for Data and Storage Protocols with Cisco expert Mike Frase. Mike Frase has been with the Cisco Technical Support group for 17 years working in the Customer Assurance Engineering (CAE) team at Cisco. During the past five years, his focus has been on data center technologies, including 10 gigabit Ethernet, in deployments of Cisco Unified Computing System and NX-OS in Cisco Unified Fabric. Frase holds CCIE #1189 certification in Storage Networking.
Remember to use the rating system to let Mike know if you have received an adequate response.
Mike might not be able to answer each question due to the volume expected during this event. Remember that you can continue the conversation on the Data Center sub-community discussion forum shortly after the event. This event lasts through November 4, 2011. Visit this forum often to view responses to your questions and the questions of other community members.
10-26-2011 08:52 AM
When deploying FCoE utilizing Nexus switches, is it recommended to use vPC to a host versus traditional NIC teaming? Specifically, let's say we have a Nexus 5596UP, a 2232PP, a QLogic 8242, and a VMware host running 4.x. I have seen a lot of conflicting documentation online, and depending on who you ask at Cisco, you will get different answers! Below are some very good discussions and/or references on this topic. Thanks in advance for your help.
10-26-2011 10:36 AM
I believe what is missing in the referenced topics is the question of what your requirements are for convergence when there are port failures, and how this affects STP.
vPC allows users to build a Layer 2 network without the limitations and downsides of the STP protocol. It offers high resiliency while preserving the benefits of physical port channels, such as fast failover times and increased bandwidth.
One main advantage of vPC is that it leverages the port-channel convergence mechanism, rather than STP, in the case of a vPC member port failure. Unlike STP, when there is a device or link failure within a vPC domain, there is no protocol packet exchange or re-convergence within the network. The state change and convergence are instead contained within the vPC domain, between the vPC peer switches.
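For context, a minimal sketch of the vPC peer setup on NX-OS (the domain ID, keepalive addresses, and interface numbers here are hypothetical; adapt them to your topology):

```
! Minimal vPC peer setup -- mirror this on both Nexus 5500 peers,
! swapping the keepalive source/destination addresses.
feature vpc
feature lacp

vpc domain 10
  peer-keepalive destination 10.0.0.2 source 10.0.0.1

interface port-channel 1
  description vPC peer-link
  switchport mode trunk
  vpc peer-link

interface Ethernet1/1-2
  channel-group 1 mode active
```

With the peer-link and peer-keepalive up, member ports on the two peers can then be bundled into vPCs toward downstream devices.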
Hope this brings another aspect to the network design thought.
10-26-2011 10:55 AM
Thanks for the quick response. I am aware of the advantages of vPC with respect to convergence, especially when dealing with switch-to-switch connections. However, I was told by Cisco that the traditional NIC teaming in VMware is active/active and that it should meet the requirements. I'm not too familiar with VMware, but I read Brad's post below, and he points out the advantages you still have with vPC.
Brad's post makes sense to me, and I thought vPC was the way to go, but I was advised by Cisco that figure 22 (Cisco Nexus 2232PP with straight-through vPC) isn't considered best practice, though it is supported. In short, the benefits gained aren't worth the added complexities or risks you take on when adding vPC into the equation with FCoE.
What would you consider best practice? Or can you elaborate on any known risks or complexities?
!!Figure 22 in the link below.
!!Excerpt from the blog...
In cases where you have virtualization hosts (VMware, for example), you can obtain active/active forwarding without vPC because some virtual machines will be pinned to one link while others are pinned to the other link, failing over to the remaining link if one link fails. Keep in mind that if one link fails here, the virtualization host will need to alert the upstream network that the affected virtual machines are now using a new link. This is accomplished through gratuitous ARP messages, one for each affected VM.
However, vPC to a virtualization host provides benefits too, in that each individual virtual machine can use each physical link, rather than just one. Furthermore, when one of those links fails, the virtualization host does not need to alert the upstream network with gratuitous ARP messages, because the single logical link created by vPC remains intact. The network topology did not change despite a link failure; vPC hides link failures from the network forwarding tables. Hence, with vPC to a virtualization host, the bandwidth of all links is better utilized and link failures are handled much more efficiently (overall, fewer moving parts).
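To make the host-facing side concrete, here is a rough sketch of a vPC member port channel toward a host (interface and channel numbers are hypothetical). Note that vSphere 4.x standard vSwitches do not support LACP, so a static bundle is typically used:

```
! Host-facing vPC -- configure the same vPC number on BOTH N5K peers
! so the two links appear to the host as one logical port channel.
interface port-channel 20
  switchport mode trunk
  vpc 20

interface Ethernet1/20
  description link to ESX host NIC
  channel-group 20 mode on    ! static bundle; no LACP on vSphere 4.x standard vSwitch
```

On the ESX side, the vSwitch load-balancing policy would be set to "Route based on ip hash" to match the static port channel.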
10-26-2011 11:37 AM
The straight-through vPC is the only design consideration and best practice for FCoE, due to the SAN A/B isolation requirement. This will change soon with the next N5K release and its support for Enhanced vPC, which will also help with orphan-port concerns. These are the two design risks most often discussed.
10-26-2011 11:55 AM
Thanks again Mike.
10-28-2011 08:42 AM
Hi, I have a question; can someone help me out? I have a 3750X 48-port switch, but it is showing a boot loading error.
e-universalk9-mz.122-53.SE2/c3750e-universalk9-mz.122-53.SE2.bin: no such file or directory
Error loading "flash:/c3750e-universalk9-mz.122-53.SE2/c3750e-universalk9-mz.122-53.SE2.bin"
Interrupt within 5 seconds to abort boot process.
Boot process failed...
10-28-2011 09:07 AM
It looks like c3750e-universalk9-mz.122-53.SE2 and/or the .bin image is missing from flash:/ or corrupt.
You may need to perform the recovery process on this switch:
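For reference, the usual first steps from the `switch:` ROMMON prompt look roughly like this (the image path below is taken from the error message above; verify what `dir flash:` actually lists before booting):

```
switch: flash_init
switch: dir flash:
switch: boot flash:/c3750e-universalk9-mz.122-53.SE2/c3750e-universalk9-mz.122-53.SE2.bin
```

If the .bin file is genuinely absent from flash, you will need to transfer a working image first, for example with `copy xmodem: flash:<image>` over the console (slow) or from a USB drive on the 3750-X.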
10-28-2011 12:36 PM
How does FCoE interoperate with TRILL/FabricPath? Is it possible to carry FCoE over FabricPath (and if not, will it be possible someday)? Or will it remain mandatory to dedicate links for FCoE, separate from a TRILL-based fabric?
10-28-2011 02:02 PM
In today's shipping solutions, we have the concern of A/B fabric separation within the core. FabricPath would converge the two SANs (two distinct fabrics/VSANs) onto a shared link, so this is not available today. Short answer: no, not today, but yes in the future, with no committed dates. This is a large change for customers (no air-gapped SAN A/B fabrics), and we are still trying to scope the acceptance of this change and then map it to a timeline for feature support.
10-29-2011 08:31 AM
I'm not sure the SAN folks will accept one fabric instead of two. It would be a major change to SAN redundancy paradigms.
10-31-2011 07:17 AM
Yes, this will be a fundamental change: the separation between the two SANs will still exist, but only virtually.
11-02-2011 02:05 PM
Which Nexus products support FCoE?
11-02-2011 02:31 PM
Mike may correct me, but:
Nexus 7k with F1 and F2 cards.
11-03-2011 06:34 AM
Suray is correct. Also add:
MDS 9000 10 GigE 8 port FCoE Mod DS-X9708-K9
Nexus 4000 - Switch for IBM H-class chassis
Nexus B22 FEX for the HP C-class chassis
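As an illustration of what turning on FCoE looks like on one of these platforms, here is a rough sketch for a Nexus 5000/5500 (the VLAN, VSAN, and interface numbers are hypothetical):

```
feature fcoe

! Map a dedicated FCoE VLAN to the VSAN for this fabric
vlan 100
  fcoe vsan 10
vsan database
  vsan 10

! Bind a virtual Fibre Channel interface to the host-facing Ethernet port
interface vfc 1
  bind interface Ethernet1/1
  no shutdown
vsan database
  vsan 10 interface vfc 1

! The Ethernet port must trunk the FCoE VLAN
interface Ethernet1/1
  switchport mode trunk
  switchport trunk allowed vlan 1,100
```

On the other fabric (SAN B), the mirror configuration would use a different FCoE VLAN/VSAN pair to preserve the A/B separation discussed above.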