
Ask the Experts: Design, Plan, Configure, Implement, and Troubleshoot Fibre Channel over Ethernet (FCoE)

Community Manager

Design, Plan, Configure, Implement, and Troubleshoot Fibre Channel over Ethernet (FCoE) with Ozden Karakok

Welcome to the Cisco Support Community Ask the Expert conversation. This is an opportunity to learn how to design, plan, configure, implement, and troubleshoot Fibre Channel over Ethernet (FCoE) with Cisco expert Ozden Karakok. FCoE is an encapsulation of Fibre Channel frames over Ethernet networks. FCoE allows you to create flexible, agile converged networks at the edge and core for multihop FCoE with fast, high-density Cisco Nexus 7000 Series and Cisco Nexus 5000 Series access switches. These switches support multiple Ethernet storage protocols, offering superior investment protection for enterprise and virtualization environments and cloud-ready data centers. Consolidate, scale, and save using multihop FCoE.

Ozden Karakok is a technical leader for the Customer Advanced Engineering Team in the Global Technical Center in Europe, supporting data center and unified computing solutions through product testing and early field trials. She has been with Cisco for more than 14 years, specializing in storage area and data center networks. She previously worked as an escalation engineer in the Cisco Storage Technical Assistance Center. Karakok graduated from Bosphorus University and holds CCIE certification 6331 in Routing and Switching, SNA/IP, and Storage Networking.

Remember to use the rating system to let Ozden know if you have received an adequate response.

Ozden might not be able to answer each question because of the volume expected during this event. Remember that you can continue the conversation on the Data Center community Unified Computing subcommunity shortly after the event. This event lasts through Friday, May 17, 2013. Visit this forum often to view responses to your questions and the questions of other Cisco Support Community members.

17 Replies

Luke Akpor


I am searching for the link to register for this webinar.

Can you guide me please?


Hi Luke,

You don't need to register for any event; you can ask your FCoE questions in this discussion.

If you are looking for an online event about FCoE deployments, I recently presented in the "Partner Interactive Webinars" series, and here are the recordings of the event:



Previous recordings of PIW events can be accessed from here (you may need to log in with your CCO ID):

Also, in the DC CCNA series we recently talked about FCoE implementations as well: look for "Getting Started With FCoE Protocols" in the link below.

Please let us know if you have further questions.



Thanks Ozden,

This will be really useful.


Jignesh Desai

Is this the right forum to ask about multipathing in UCS? I wanted to understand how it works and whether it helps with having two active links for SAN traffic.


Hi Jignesh,

Multipathing on UCS depends on the host operating system, the multipathing software, and the array (active/active or active/passive). Generally speaking, multipathing software uses all paths to active/active arrays, and only the paths to the primary SP in the case of active/passive arrays. Most vendor-supplied multipathing software works in tandem with the array to select the best path(s).
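As a concrete illustration of the host-side piece, on an ESXi host you can inspect and change the native multipathing plugin's path-selection policy per device; the device identifier below is a placeholder, and round-robin should only be set where the array vendor supports it:

```
# List claimed devices and their current path-selection policy
esxcli storage nmp device list

# Switch a (placeholder) device to round-robin so paths on both fabrics are used
esxcli storage nmp device set --device naa.6006016045502500deadbeef00000000 --psp VMW_PSP_RR
```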

You may want to have a look at the UCS white paper below, where the SAN connectivity options and concepts are explained in detail. (The SAN concepts are still the same in the latest UCSM release; for FCoE we can talk about different connectivity options, but in general we still have SAN A / SAN B isolation.)

On the practical side, we have a white paper on "UCS iSCSI Boot and Fabric Failover," showing how to use host-based multipathing drivers for load balancing and failure handling:

There was an earlier discussion on SAN multipathing in the support forums that might be useful:

There is a small FC section in the "UCS Deep Dive" session:

And my colleague Craig covered all the UCS SAN options in the BRKCOM-2002 session at Cisco Live Melbourne:

If you have a specific scenario, we can try to discuss it as well.

I hope this helps.




Ozden, I have a Dell M1000e blade server chassis that is using a Dell Force10 I/O Aggregator (IOA) as the server enclosure's I/O module in the back. It's like a Cisco FEX, except it does support local switching. The IOA also supports FCoE transit, DCB, and FIP snooping. As I said, it's like a FEX.

We are uplinking the IOAs to a Cisco Nexus 5548. We want to deploy the 5548 in NPV mode and connect it to a fibre channel switch that will be in full fabric mode.

Let's focus on the native FC ports that we will connect between the Nexus and the FC switch...ok?

OK...the FC ports on the 5548 will be considered NP ports, as the 5548 is in NPV mode. Those NP ports will log into the FC fabric first with a FLOGI in the usual manner, get assigned an FCID, and so on...

The 5548 will then allow the N_Ports connected to it (meaning the server CNAs) to register their WWPNs, log into the fabric (as FDISCs) and get assigned an FCID, and so on...
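The NPV/NP-port arrangement described above can be sketched in NX-OS configuration. This is a minimal sketch with hypothetical interface numbers; the key points are that NPV is a switch-wide mode on the edge switch and that the core switch must have NPIV enabled to accept the FDISCs:

```
! Edge switch (Nexus 5548) operating in NPV mode
feature npv
interface fc2/1
  switchport mode NP
  no shutdown

! Core FC switch (full fabric mode) must accept multiple logins per port
feature npiv
```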

OK, here is the question...

We know that each FC link between the 5548 and the FC switch can support a theoretical maximum of 256 WWPNs. This is a T11 standard maximum, but I'm not sure if Cisco supports fewer (or more, for that matter). So, to calculate the number of FC links I will need between the 5548 and the FC switch, one of the things I must take into consideration is the number of WWPNs that the blade chassis will present to the 5548.

The blade chassis will consist of 16 ESXi hosts with a 20:1 consolidation ratio (20 VMs per host). That tells me that the 5548 should expect to perform fabric logins for each vWWPN of each VM. That means I should assume there will be about 320 vWWPNs from the guest OSs, plus 16 WWPNs (one from each physical CNA), for which the 5548 must provide fabric logins.

Or should I???

Am I correct in counting the vWWNs that the 5548 must support OR is it the case that the CNA will hide or mask the vWWNs sitting behind them and therefore I only need to consider the physical WWNs from the CNAs when I calculate the number of FC links?

Sheeeew...sorry, I know that was long....hope I was clear...thank you!!

Hi Visitor68,

From your explanation, I understand that you need to run nested NPIV on the N5548. I am not sure about the Dell IOA configuration, and I believe you have already taken care of that piece of the puzzle. For counting the number of pWWNs, you should consider all of those VMs (16 * 20), since each and every VM will be represented by a single entity (pWWN) while you configure zoning on the upstream FC switch. So you will have (16 * 20) + 16 CNAs ~ 336 FLOGIs/FDISCs minimum.

The important thing to keep in mind is the FLOGI/FDISC limit on the N5548; the verified limit is 180 with the latest NX-OS 6.0(2)N1(1). Please have a look at the verified scalability limits:

In this case you need to consider a minimum of 2 uplink ports to the NPV core switch (the FC switch where you will enable NPIV).
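The arithmetic above can be sketched as a small helper; the 256-logins-per-link ceiling is the T11 figure quoted in the question, and the host/VM counts match this scenario:

```python
def np_uplinks_needed(hosts, vms_per_host, logins_per_link=256):
    """Return (total fabric logins, minimum NP uplinks) for an NPV edge switch.

    Each VM performs one FDISC (when NPIV is used end to end) and each
    physical CNA performs one FLOGI; every NP uplink can carry at most
    `logins_per_link` logins.
    """
    total_logins = hosts * vms_per_host + hosts
    min_uplinks = -(-total_logins // logins_per_link)  # ceiling division
    return total_logins, min_uplinks

logins, uplinks = np_uplinks_needed(hosts=16, vms_per_host=20)
print(logins, uplinks)  # 336 logins -> at least 2 NP uplinks
```

Note that the per-switch verified FLOGI/FDISC limit (180 in the release mentioned above) is a separate constraint from the per-link ceiling and should be checked against the scalability guide as well.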

I hope this helps.



Hi, Ozden. Thank you kindly for your response. I did some research over the weekend and I have reached a different conclusion. Thoughts?

If you read any white paper on NPIV, the author will drone on about how NPIV allows a VM to log into the fabric as its own entity, get an FCID, and be zoned based on its vWWN. That's true, but it's not the whole picture. Apparently, that is only true if Raw Device Mappings (RDMs) are used, in which case a VM is directly mapped to a storage LUN on the SAN. Otherwise, what happens is that the ESX server's CNA is mapped to a LUN based on its WWPN and/or WWNN (name-based zoning), while the VMs have access only to the hypervisor. The hypervisor essentially proxies access to the VM's .vmdk file sitting on the VMFS datastore for that particular ESX host. So one only has to manage FC connectivity, zoning, etc., for a single ESX host, and not for all the VMs that sit on it. So, when figuring out the number of WWNs, only the physical WWNs have to be counted.

So, in short:

The VMs don’t have access to any LUNs -- only the hypervisor.

The virtualization admin configures datastores, which are placed in a LUN.

The VM’s virtual disk then resides as a .vmdk file in the chosen datastore.

There is no differentiation per se between VMs on the same datastore. That's where Storage vMotion/DRS can move a VM to a different datastore if necessary to gain better performance.

If you use an HBA with NPIV (assigning individual WWNs to each VM), you will need to use Raw Device Mapping disks, and in that case the FLOGI/FDISC limit I mentioned in my previous response will apply. But if you will not be utilizing NPIV on the HBA and would like to benefit from VMDK and VMFS, you will only need to worry about the real pWWNs of the physical HBAs, since those are the ones that log in to the fabric.

Thanks for clarifying your question.



Hi Ozden,

I appreciate and thank you for this chance to ask a real expert for advice. I'm pretty new to the data center environment and would like to know the difference between FCoE and DCB; i.e., isn't it true that every FCoE implementation includes the DCB enhancements?



Hi Tenaro,

It is great to have you in this session.

FCoE requires specific Ethernet capabilities to be implemented, such as lossless switches/fabrics (IEEE 802.3x PAUSE) and jumbo frames. FCoE utilizes advanced IEEE 802.1 capabilities, and this set of functions is called DCB (Data Center Bridging). These DCB extensions should be considered in order to make FCoE reliable and easy to use.

The main three standards that make up DCB are:

Priority-based Flow Control (PFC) — IEEE 802.1Qbb provides a link level flow control mechanism that can be controlled independently for each Class of Service (CoS), as defined by 802.1p. The goal of this mechanism is to ensure zero loss under congestion in DCB networks.

Enhanced Transmission Selection (ETS) — IEEE 802.1Qaz provides a common management framework for the assignment of bandwidth to 802.1p CoS-based traffic classes and guarantees that amount of bandwidth for the specified class of service.

Data Center Bridging Capability eXchange Protocol (DCBX) — A discovery and capability exchange protocol used for conveying capabilities and configuration of the above features between neighbors to ensure consistent configuration across the network. This protocol leverages functionality provided by LLDP (Link Layer Discovery Protocol).
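On a Nexus 5000, these pieces come together when FCoE is enabled: turning on the feature activates the DCB machinery (PFC for the FCoE class and DCBX over LLDP) with platform defaults. A minimal configuration sketch, with hypothetical VSAN/VLAN/interface numbers:

```
feature fcoe                   ! also activates PFC/DCBX defaults
vsan database
  vsan 100
vlan 100
  fcoe vsan 100                ! map FCoE VLAN 100 to VSAN 100
interface vfc10
  bind interface Ethernet1/10  ! virtual FC port bound to the CNA-facing port
  no shutdown
vsan database
  vsan 100 interface vfc10
! Verify PFC/DCBX negotiation on the bound Ethernet port:
! show interface ethernet 1/10 priority-flow-control
```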

Here is the DCB Protocol tree:

Short and very recent article from SNIA about DCB if you want to learn a bit more:

I hope this helps.



I was able to find many documents describing how DCB enhances FCoE, just as you pointed out. What I wasn't able to find is an FCoE implementation without the DCB enhancements. Are you aware of any such implementation, i.e., a device that supports FCoE but doesn't support the DCB enhancements?

Additionally, can you tell us if Nexus 3000 series supports FCoE? I'm actually trying to figure out what is the most important difference between Nexus 3k and 5k series.

Hi Tenaro,

I am sorry, but I am not aware of a device that supports FCoE without supporting the DCB enhancements. It would be nice if you could share your research results with us when you find the answer, or perhaps explain your design requirement a bit more.

The Nexus 3000 platform doesn't support FCoE; for FCoE support you need to look at (as you know) the Nexus 55XX, MDS 95XX, Nexus 7000, and UCS platforms. The Nexus 3000 is designed for ultra-low-latency environments, whereas the Nexus 5000 targets general data center access environments and has features such as FEX support, FCoE, Adapter-FEX, VM-FEX, etc.

Just for your reference, I am posting the link to the datasheets for our DC platforms:

I hope this helps.




Will FCoE work on 40G or 100G Ethernet interfaces? What about QSFP+ where we have 4 times 10G?
