Ask the Expert: Design, Configure, Implement, and Troubleshoot Fibre Channel over Ethernet (FCoE) and Cisco MDS 9000 SAN Switches with Cisco Press Authors

ciscomoderator
Community Manager
 

This is an opportunity to learn and ask questions about how to design, configure, implement, and troubleshoot Fibre Channel over Ethernet (FCoE) and Cisco MDS 9000 SAN switches with Cisco experts Ozden Karakok and David Klebanov.

FCoE is an encapsulation of Fibre Channel frames over Ethernet networks. FCoE allows you to create flexible, agile converged networks at the edge and core for multihop FCoE with fast, high-density Cisco Nexus 7000, Cisco MDS 9000, Cisco Nexus 6000, and Cisco Nexus 5000 Series switches and the Cisco Unified Computing System product family.

Cisco Storage Networking Solutions covers Cisco MDS 9000 Series multilayer director and multiservice switches.

Monday, March 2nd, 2015 to Friday, March 13th, 2015

Ask your Questions during this two-week, open discussion thread!

Ozden Karakok is a Technical Leader on the Data Center products and technologies team in the Technical Assistance Center (TAC). Ozden has been with Cisco Systems for fifteen years and specializes in storage area and data center networks. Prior to joining Cisco, Ozden spent five years working for a number of Cisco's large customers in various telecommunications roles. Ozden is a Cisco Certified Internetwork Expert (CCIE No. 6331) in Routing and Switching, SNA/IP, and Storage, and also holds VCP and ITIL certifications. She is a frequent speaker at Cisco and data center events, and holds a degree in computer engineering from Istanbul Bogazici University. She is currently working on Application Centric Infrastructure (ACI) and enjoying being a mother of two wonderful kids.

David Klebanov is a Technical Solutions Architect with Cisco Systems. David has over 15 years of diverse industry experience architecting and deploying complex network environments. In his work, David influences the strategic development of the industry-leading data center switching platforms, which lay the foundation for next-generation data center fabrics. David also takes great pride in speaking at industry events, releasing publications, and working on patents. David is a CCIE (No. 13791) in Routing and Switching. You can follow David on Twitter at @DavidKlebanov.

 

Ozden and David are coauthors of a series of Cisco Press books:

CCNA Data Center DCICT 640-916 Official Cert Guide

Published Mar 13, 2015

CCNA Data Center DCICT 640-916 Official Cert Guide Premium Edition eBook and Practice Test

Published Mar 4, 2015

CCNA Data Center Official Cert Guide Library

Published Mar 20, 2015

 

@CiscoPress  @Cisco_Support

 

Find other knowledge-sharing resources at https://supportforums.cisco.com/expert-corner/knowledge-sharing.

**Ratings Encourage Participation!**
Please be sure to rate the answers to questions.


Hello Daniel,

Thank you for your question. The Unified Fabric approach consolidates network and storage traffic over a single unified infrastructure. As today's data center fabrics grow in size, sometimes by adding a significant number of top-of-rack switches, there is a need to adequately accommodate the growth of the FCoE environment as well.

Features such as FCoE-NPV allow building scaled-out multihop unified fabric topologies that adhere to SAN and Ethernet best practices alike. More specifically, an FCoE multihop topology has access layer and aggregation/core layer switches. Access layer switches connect to initiators or targets, while aggregation/core layer switches provide aggregation points for storage connectivity. With FCoE-NPV, access layer switches are put in "proxy" mode, where they neither participate in Fibre Channel switching nor process Fibre Channel logins (FLOGIs). In this mode they also do not consume domain IDs or perform zoning, which are often the most significant constraints on overall fabric scale.

Aggregation/core layer switches run FCoE-NPIV, which allows them to allocate multiple Fibre Channel IDs (FCIDs) over a single physical interface (the one connected to the access layer switch running in FCoE-NPV mode).
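To make this concrete, here is a minimal configuration sketch of the two roles described above, assuming a Nexus 5600 at the access layer and a Nexus 7000 storage VDC at the core; the VLAN/VSAN numbers and interface names are hypothetical examples, not taken from any specific deployment:

```
! --- Access layer switch (e.g. Nexus 5600) in FCoE-NPV mode ---
feature fcoe-npv               ! proxy mode: no FC switching, no domain ID
vlan 100
  fcoe vsan 100                ! map the FCoE VLAN to its VSAN
interface Ethernet1/1
  switchport mode trunk
  switchport trunk allowed vlan 100
interface vfc1                 ! VNP port facing the FCoE-NPIV core
  bind interface Ethernet1/1
  switchport mode NP
  no shutdown

! --- Aggregation/core switch (e.g. Nexus 7000 storage VDC) ---
feature npiv                   ! accept multiple FLOGIs on one F port
interface vfc10                ! F port facing the FCoE-NPV access switch
  bind interface Ethernet2/1
  switchport mode F
  no shutdown
```

Consult the configuration guide linked below for the exact feature names and defaults on your platform and software release.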

Cisco Nexus 5K/6K switches and the UCS Fabric Interconnect can operate in FCoE-NPV mode, while the Nexus 7K and MDS 9700 typically serve as the FCoE-NPIV devices.

You can learn more about FCoE-NPV by reading through the latest configuration guide for the Nexus 5600 switches, or by reading our CCNA Data Center DCICT 640-916 Official Cert Guide when it becomes available in a few days.

http://www.cisco.com/c/en/us/td/docs/switches/datacenter/nexus5600/sw/fcoe/7x/b_5600_FCoE_Config_7x.html

http://www.ciscopress.com/store/ccna-data-center-dcict-640-916-official-cert-guide-9781587144226

 

Hope that helps!

David Klebanov

@DavidKlebanov

 

 

Twitter: @DavidKlebanov

Mariana Llamas
Level 1

What is the difference between NPV and NPIV? And which Cisco products work best with each of them?
 

Hi Mariana,

It is nice to have you in this forum and thanks for your question.

Let's first try to understand the definition of each feature.

N-Port ID Virtualization (NPIV)
NPIV allows a Fibre Channel host connection or N-Port to be assigned multiple N-Port IDs or Fibre Channel IDs (FCID) over a single link. All FCIDs assigned can now be managed on a Fibre Channel fabric as unique entities on the same physical host. Different applications can be used in conjunction with NPIV. In a virtual machine environment where many host operating systems or applications are running on a physical host, each virtual machine can now be managed independently from zoning, aliasing, and security perspectives.


N-Port Virtualizer (NPV)
An extension to NPIV is the N-Port Virtualizer feature. The N-Port Virtualizer feature allows the blade switch or top-of-rack fabric device to behave as an NPIV-based host bus adapter (HBA) to the core Fibre Channel director. The device aggregates the locally connected host ports or N-Ports into one or more uplinks (pseudo-interswitch links) to the core switches. Whereas NPIV is primarily a host-based solution, NPV is primarily a switch-based technology. It is designed to reduce switch management and overhead in larger SAN deployments. Consider that every Fibre Channel switch in a fabric needs a different domain ID, and that the total number of domain IDs in a fabric is limited. In some cases, this limit can be fairly low depending upon the devices attached to the fabric. The problem, though, is that you often need to add Fibre Channel switches to scale the size of your fabric. There is, therefore, an inherent conflict between trying to reduce the overall number of switches to keep the domain ID count low while also needing to add switches to have a sufficiently high port count. NPV is intended to address this problem.


Cisco MDS 9000 NX-OS supports industry-standard N-port identifier virtualization (NPIV), which allows multiple N-port fabric logins concurrently on a single physical Fibre Channel link. HBAs that support NPIV can help improve SAN security by enabling configuration of zoning and port security independently for each virtual machine (OS partition) on a host. In addition to being useful for server connections, NPIV is beneficial for connectivity between core and edge SAN switches.
NPV is a complementary feature that reduces the number of Fibre Channel domain IDs in core-edge SANs. Cisco MDS 9000 family fabric switches operating in the NPV mode do not join a fabric; they just pass traffic between core switch links and end devices, which eliminates the domain IDs for these switches. NPIV is used by edge switches in the NPV mode to log in to multiple end devices that share a link to the core switch.
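As an illustration of how the two features are enabled, here is a hedged configuration sketch for an MDS core-edge pair. The interface name is a hypothetical example; check your platform's guide before applying this:

```
! --- Core MDS director: enable NPIV so a single F port can
!     accept multiple fabric logins (FLOGIs) over one link ---
switch(config)# feature npiv

! --- Edge fabric switch (e.g. MDS 9148) in NPV mode ---
! NOTE: on MDS, enabling NPV erases the running configuration
!       and reloads the switch, so plan accordingly.
switch(config)# feature npv
switch(config)# interface fc1/1       ! NP uplink toward the core
switch(config-if)# switchport mode NP
switch(config-if)# no shutdown
```

Because the edge switch in NPV mode does not join the fabric, it consumes no domain ID; all its hosts log in through the core switch's NPIV-enabled F port.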


The same NPV logic applies to FCoE-NPV; in the end, FCoE is still FC. David explained FCoE-NPV during our webcast yesterday (https://learningnetwork.cisco.com/community/learning_center/meet_authors). I am posting the recording of that section here: https://learningnetwork.cisco.com/docs/DOC-26425

Here are some SAN topology diagrams that can help you understand the concept better.

We can summarize the NPV and NPIV feature capabilities of the Cisco data center platforms as follows:

 

| Cisco Data Center Platform | NPIV | NPV | FCoE NPV |
|---|---|---|---|
| Cisco MDS 9700 Series Director Switches | Yes | - | - |
| Cisco MDS 9500 Series Director Switches | Yes | - | - |
| Cisco MDS 9250i | Yes | - | - |
| Cisco MDS 9222i | Yes | - | - |
| Cisco MDS 9148 | Yes | Yes | - |
| Cisco MDS 9148S | Yes | Yes | - |
| Cisco MDS Blade Switches | Yes | Yes | - |
| Cisco Nexus 9000 Director and 9300 Switches | - | - | - |
| Cisco Nexus 7000 Director Switches | Yes | - | - |
| Cisco Nexus 7700 Director Switches | Yes | - | - |
| Cisco Nexus 6004 | Yes | Yes | Yes |
| Cisco Nexus 5600 | Yes | Yes | Yes |
| Cisco Nexus 5500 | Yes | Yes | Yes |
| Cisco UCS FI 6248UP – 6296UP | Yes | Yes | Yes |
| Cisco UCS FI 6120XP – 6140XP | Yes | Yes | Yes |

I hope this helps.

Thanks,

Ozden

Hi again,

If Cisco ACI is used when designing a new data center, do you have any recommendations for Fibre Channel traffic? I'm just trying to understand whether the approach for Fibre Channel is always the same, no matter what is planned for the Ethernet side (standard switching mode or a spine-leaf combination).

 

Thanks,

Tenaro

Hello Tenaro,

Today you cannot run Fibre Channel or FCoE over Cisco ACI. If you are using traditional Fibre Channel, you will be running it over a separate physical SAN switching infrastructure. If you are using FCoE, there are options. For example, if you leverage Cisco UCS B-Series blade servers, you can connect the UCS Fabric Interconnect to your ACI leaf node for network and/or IP storage connectivity, and to the upstream SAN switching infrastructure for either FC or FCoE storage access. In this case, FCoE traffic arriving from the blade server at the Fabric Interconnect will be split: network and/or IP storage traffic is forwarded to ACI, while FC/FCoE traffic goes to the SAN switching infrastructure. Something like this:

 

 

I hope that clarifies things for you!


Thank you,

David Klebanov

@DavidKlebanov

Twitter: @DavidKlebanov

I am new to Cisco MDS. I have a question: how do I configure zoning with NPIV enabled? Our ESXi hosts are connected to Cisco MDS switches, and there are multiple VMs running on each ESXi host. Thanks!
