Does the Nexus 93180YC-FX NX-OS switch already support Fibre Channel interfaces?
- The datasheet says that the downlink ports are FC-capable.
- The NX-OS configuration guide does not mention Fibre Channel interface configuration; only an FCoE configuration chapter is included.
- The release notes also only mention FCoE configuration.
What Fibre Channel SFP transceivers are currently supported?
I have said it already, without success!
The 9K platform will most likely never get full FC functionality (Cisco PM, please confirm or correct me if I am wrong).
I know this is not the question, but ACI doesn't support lossless Ethernet end to end; therefore not even FCoE is supported end to end, only at the edge. The same applies to classical FC.
Plus, be aware that even the FC functionality of the N5K is not exactly identical to an MDS!
I agree with Walter: the N9K will only ever support FCoE and NPV, not the full Fibre Channel protocol suite. For that you'll need to place an MDS SAN switch for Fibre Channel and use the N9K for Ethernet only.
Doing zoning on the UCS FI requires "FC switching mode" on the FI; zoning is done by means of UCS service profile specifications, and you cannot take full control over all the FC parameter settings.
As I mentioned many times: FC implementation on UCS FI is not 100% compliant with MDS SAN-OS.
Q: Who in your organisation does FC zoning, the storage team or the server team? If you have a storage team, they will not like the automatic FC zoning done by UCS service profiles.
"As I mentioned many times: FC implementation on UCS FI is not 100% compliant with MDS SAN-OS."
Does any (partner) documentation exist on the differences between the FI and MDS FC code?
I would really like to hear some hard technical arguments for why not to use the FI in FC switching mode when running smaller installations or using a POD design.
I don't think there is a document; I remember that features like
- static domain IDs
- smart zoning
are not supported. And zoning is automated by SP creation; I think there is no way to impose your own zone/zoneset naming conventions.
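For comparison, on an MDS (or any full FC switch) you build and name the zones yourself; a minimal sketch, with hypothetical WWPNs and naming conventions, of what manual single-initiator/single-target zoning looks like:

```
! Manual zoning on MDS/NX-OS (hypothetical names and WWPNs)
zone name Z_ESX01_HBA0__VMAX_FA1 vsan 10
  member pwwn 20:00:00:25:b5:aa:00:01
  member pwwn 50:00:09:72:c0:01:a1:01
zoneset name ZS_FABRIC_A vsan 10
  member Z_ESX01_HBA0__VMAX_FA1
zoneset activate name ZS_FABRIC_A vsan 10
```

In FC switching mode, UCSM generates the equivalent zones from the service profile, but with its own generated names rather than a convention like the one above.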
Of course, in a PoC or a small lab environment this might be OK, but I would never recommend it for an enterprise-class environment.
My own statistics say that >95% of all UCS environments run FC end-host mode (ask Cisco what their guess is).
BTW, I had customers who ran FC switch mode and later migrated to end-host mode, which is disruptive and ugly.
Thanks for the answers.
I can't really see any features or configuration limits that would disqualify running FIs in switch mode when using either a direct-attached storage system or separate front-end ports on larger storage systems (VMAX / UCP). But I would love to hear otherwise :-)
Sure, the numbers of VSANs, zones, FLOGIs, and buffer credits are not impressive compared to the smallest MDS models, but with a maximum of 160 servers it wouldn't be a problem. Having the shortest and most predictable path to storage, and a simple setup, is a big plus in my eyes.
I have seen a lot of cases, especially now with all-flash, where customers have a mix of other/older servers attached to FC switches and the different port speeds cause loss of buffer credits, creating back pressure. And the lack of a standard for FC port-channels causes link congestion between FIs and Brocade switches, which is hard to troubleshoot because microburst congestion is not caught by monitoring tools.
Zoning has never been the most loved task for storage (or server) people, so having UCSM take care of this makes day-to-day operations a lot easier to perform and troubleshoot.
If connectivity to other equipment or sites is needed, separate VSANs pinned to other uplinks can be used, again to separate traffic and make performance predictable.
The dual-FI reboot when changing from end-host mode to switch mode (and the other way around) was fixed in 3.1(2b); check CSCuy20188.
It is now also possible to create an exchange-based load-balanced port-channel between the UCS FI and Brocade. It does require a Brocade director switch with the new FCoE line card, though.
We are also facing this problem. FC links (to servers and targets) won't come up on the 93180YC-FX, and they show Status: npmExtLinkDown (they are in F mode).
Is there any way to connect servers and targets together via a single Nexus 93180YC-FX using FC interfaces (we have 8G SFPs for the servers and 16G for the targets) and make it switch FC traffic?
NPV devices require upstream NPIV-capable devices to provide FCIDs, handle FLOGIs, etc.
This switch acts as an NPV device, not as an NPIV core device.
You will still need an upstream MDS, N5K, etc., in tandem.
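A rough sketch of what that tandem looks like (hedged: exact commands and feature names vary by platform and release, so check the relevant config guides before applying):

```
! On the upstream NPIV core (MDS/N5K): allow multiple FLOGIs per F port
feature npiv

! On the Nexus 93180YC-FX running as an NPV edge (pre-9.3(3))
feature npv
interface fc1/1
  switchport mode NP   ! uplink toward the NPIV core
interface fc1/10
  switchport mode F    ! server- or target-facing port
```

Without an NP uplink to an NPIV core, the F ports stay down, which matches the npmExtLinkDown status described above.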
We have finally made it!
The NX-OS 9.3(3) contains the full FC/FCoE switching capability:
Enjoy, but please be aware that native FC supports 16/32G speeds only (no 8G or lower)!
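The basic enablement steps look roughly like the following (a sketch based on the SAN-switching workflow; the exact feature-set names and allowed port ranges are model- and release-specific, so verify against the configuration guide before applying):

```
! Install and enable the SAN switching feature set (approximate)
install feature-set fcoe
feature-set fcoe
feature npiv

! Convert a block of unified ports to native FC (requires a reload)
slot 1
  port 1-4 type fc
copy running-config startup-config
reload
```

After the reload, the converted ports appear as fc interfaces and VSANs, zoning, and zonesets can be configured directly on the switch.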
Hi, there was no full FC support until recently. We even managed to convince our supplier to provide us a pair of MDS switches for free to achieve what the 93180YC-FX claimed to support. What was supported was NPV mode only.
Recently I heard that they have at last implemented it, so try updating to the newest version.
FC switching has been added as of NX-OS 9.3(3). Please see the release notes:
The configuration guide for FC switching is here:
That is exactly what it means: the Nexus 93180YC-FX is able to perform all the FC SAN switch functions on its own.
Please be aware of the licensing requirements as noted in the configuration guide:
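To check that the required license and features are actually active after the upgrade, something along these lines (standard NX-OS show commands):

```
switch# show license usage
switch# show feature | include fcoe|npiv|npv
```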