FCoE uplink is down on vsan

Hello, I have a problem connecting Nexus switches to fabric interconnects.

I have created VSANs, unified uplinks, and vfc interfaces on the Nexus, plus the VLAN-to-VSAN mappings, but I still see this fault in the event log.

vHBA templates and vHBAs are also created.

I have two Nexus 5548UP switches connected to UCS 6324 FIs.
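Roughly, the Nexus-side pieces look something like this (a minimal sketch, not the exact running config; vsan 500, VLAN 500, port-channel 4 and vfc50 match the outputs further down the thread):

! FCoE and LACP features
feature fcoe
feature lacp

! VSAN and the FCoE VLAN mapped to it
vsan database
  vsan 500
vlan 500
  fcoe vsan 500

! Ethernet port channel toward the FI, carrying the FCoE VLAN
interface port-channel4
  switchport mode trunk
  switchport trunk allowed vlan 1,500

! Virtual Fibre Channel interface bound to the port channel
interface vfc50
  bind interface port-channel4
  switchport trunk allowed vsan 500
  no shutdown

! Put the vfc into the VSAN
vsan database
  vsan 500 interface vfc50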

14 Replies

Walter Dey
VIP Alumni

Hi Mikhail

http://www.cisco.com/c/en/us/support/docs/servers-unified-computing/ucs-manager/116188-configure-fcoe-00.html is a good checklist! Have a look, and report back if you still have issues.

Walter.

Hi, I have followed these instructions during configuration, but the uplinks are still down.

Sorry! I overlooked the fact that you are using UCS Mini (= FI 6324).

With UCS release 3.0(1), UCS Mini only supports FC switch mode, not FC NPV mode.

This document refers to an FI in FC NPV mode, which you don't have.

Note: The UCS is an N Port Virtualization (NPV) switch by default, so the upstream switch needs to be in N Port Identifier Virtualization (NPIV) mode (enter the feature npiv command in order to enable it). See Configuring N Port Virtualization for more information on this feature.
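For completeness, enabling NPIV on an upstream N5k is just the one feature command sketched below; but as noted above, this only matters when the FI runs in NPV (end-host) mode, which UCS Mini 3.0(1) does not:

! Only relevant if the FI were in FC end-host (NPV) mode - not the case here
configure terminal
  feature npiv
end
show feature | include npiv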

This has been discussed before, see e.g.

https://supportforums.cisco.com/discussion/12436771/please-tell-me-how-connect-fcoe-ucs-mini-and-nexus-n5k-n5k-c5548up-b-s32

Walter.

Walter, I believe this is the correct article in my case: http://www.cisco.com/c/en/us/support/docs/switches/nexus-5000-series-switches/116248-configure-fcoe-00.html

So I've done everything as described, including the port channel, but I still have the same issue. Am I missing something?

I've performed a migration from IBM servers and FC switches to UCS, and my storage is currently connected via iSCSI. My goal is to make it FCoE-attached so I get all the benefits of FC in a unified fabric. This is my first experience with UCS and FCoE.

Here is what I have on the Nexus side for now:

vfc50 is trunking
    Bound interface is port-channel4
    Hardware is Ethernet
    Port WWN is 20:31:8c:60:4f:1f:bd:ff
    Admin port mode is F, trunk mode is on
    snmp link state traps are enabled
    Port mode is TF
    Port vsan is 500
    Trunk vsans (admin allowed and active) (500)
    Trunk vsans (up)                       ()
    Trunk vsans (isolated)                 ()
    Trunk vsans (initializing)             (500)
    1 minute input rate 0 bits/sec, 0 bytes/sec, 0 frames/sec
    1 minute output rate 0 bits/sec, 0 bytes/sec, 0 frames/sec
      0 frames input, 0 bytes
        0 discards, 0 errors
      0 frames output, 0 bytes
        0 discards, 0 errors

And UCS:

Interface  Vsan   Admin  Admin   Status      Bind                 Oper    Oper
                  Mode   Trunk               Info                 Mode    Speed
                         Mode                                            (Gbps)
-------------------------------------------------------------------------------
vfc765     500    F     on     trunking    Veth8957                 TF   auto
vfc767     500    F     on     trunking    Veth8959                 TF   auto
vfc776     1      E     on     trunking    Po4                      TE   20
FI-A-A(nxos)# sh flogi data
--------------------------------------------------------------------------------
INTERFACE        VSAN    FCID           PORT NAME               NODE NAME
--------------------------------------------------------------------------------
vfc765           500   0x5f0000  20:00:00:25:b5:00:0a:1f 20:00:00:25:b5:00:00:0f
vfc767           500   0x5f0020  20:00:00:25:b5:00:0a:0f 20:00:00:25:b5:00:00:1f

I don't understand why vfc765 and vfc767 are not bound to Po4.

Hi Mikhail

The reference is essentially the same as the one I mentioned. It doesn't apply, because UCS Mini release 3.0(1) doesn't support FC NPV mode.

In your case the FI is acting as an FC switch, which is why you see the FLOGIs

vfc765     500    F     on     trunking    Veth8957                 TF   auto
vfc767     500    F     on     trunking    Veth8959                 TF   auto

and those ports come up as F / trunking F (TF) ports.

You seem to have a port channel between the FI and the Nexus 5k:

vfc776     1      E     on     trunking    Po4                      TE   20

The port mode is E; however, the port VSAN is 1, which is wrong. It should be 500, and that is why you get this error.
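As a quick cross-check, from the FI's NX-OS shell (connect nxos from the UCS Manager CLI) the standard show commands below list which VSAN each vfc and the port channel belong to; the VSAN assignment of the FCoE uplink itself is changed in UCS Manager under the SAN tab (SAN Cloud), not from this shell:

FI-A-A(nxos)# show interface brief | include vfc
FI-A-A(nxos)# show vsan membership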

 

Hi Walter,

I have enabled FC uplink trunking on the FI, and now everything looks correct:

-------------------------------------------------------------------------------
Interface  Vsan   Admin  Admin   Status      Bind                 Oper    Oper
                  Mode   Trunk               Info                 Mode    Speed
                         Mode                                            (Gbps)
-------------------------------------------------------------------------------
vfc765     500    F     on     trunking    Veth8957                 TF   auto
vfc767     500    F     on     trunking    Veth8959                 TF   auto
vfc776     500    E     on     trunking    Po4                      TE   20
 

But the VSAN membership is still down, and the vfc is still in the initializing state.
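The trunk and VSAN state on the N5k can be re-checked with the standard show commands below (a sketch; vfc50 and VLAN/VSAN 500 as above). "show vlan fcoe" confirms the VLAN-to-VSAN mapping, and "show vsan 500" confirms the VSAN itself is active:

show interface vfc50
show vlan fcoe
show vsan 500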

UPDATE

I've switched the vfc connected to the UCS to E mode, and now the VSAN is up:

vfc50 is trunking
    Bound interface is port-channel4
    Hardware is Ethernet
    Port WWN is 20:31:8c:60:4f:1f:bd:ff
    Admin port mode is E, trunk mode is on
    snmp link state traps are enabled
    Port mode is TE
    Port vsan is 500
    Trunk vsans (admin allowed and active) (500)
    Trunk vsans (up)                       (500)
    Trunk vsans (isolated)                 ()
    Trunk vsans (initializing)             ()
    1 minute input rate 240 bits/sec, 30 bytes/sec, 0 frames/sec
    1 minute output rate 200 bits/sec, 25 bytes/sec, 0 frames/sec
      444 frames input, 55460 bytes
        0 discards, 0 errors
      448 frames output, 53244 bytes
        0 discards, 0 errors
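For reference, the corresponding change on the N5k side is presumably just the port mode on vfc50 (a minimal sketch):

interface vfc50
  shutdown
  switchport mode E
  no shutdown

Both ends of the VE link (the FCoE uplink on the FI and the vfc on the N5k) have to agree on E mode for the trunk to come up.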
 

But I don't see the UCS WWPNs in the FLOGI database.

Did you run this on the fabric interconnect CLI: show flogi database vsan 500?

Have you installed an OS on the blade?

Yes, I can see the HBAs' WWPNs:

FI-A-A(nxos)# sh flogi data vsan 500
--------------------------------------------------------------------------------
INTERFACE        VSAN    FCID           PORT NAME               NODE NAME
--------------------------------------------------------------------------------
vfc765           500   0x5f0000  20:00:00:25:b5:00:0a:1f 20:00:00:25:b5:00:00:0f
vfc767           500   0x5f0020  20:00:00:25:b5:00:0a:0f 20:00:00:25:b5:00:00:1f

Total number of flogi = 2.

And the OS is, of course, installed.

OK! On the Nexus 5k, run the CLI command "show fcns database vsan 500".

This should give you all the devices in vsan 500.

"show flogi database vsan 500" will show only the devices that are directly connected to the Nexus 5k and have done their FLOGI there.

Yes, now I see. Can you please tell me, is this the correct output?

VSAN 500:
--------------------------------------------------------------------------
FCID        TYPE  PWWN                    (VENDOR)        FC4-TYPE:FEATURE
--------------------------------------------------------------------------
0x5f0000    N     20:00:00:25:b5:00:0a:1f                 scsi-fcp:init fc-gs
0x5f0020    N     20:00:00:25:b5:00:0a:0f                 scsi-fcp:init fc-gs
0x630000    N     50:05:07:68:03:04:90:d6 (IBM)           scsi-fcp:both

Everything seems OK, but my storage device doesn't see any WWPNs.

And the server HBAs are in unpinned status.

Did you do any zoning?

On the Nexus 5K CLI: show zoneset active vsan 500?

No, the zoneset is not present.

I hope your vsan 500 has FC zoning "disabled".

Until you set up zoning, your blades won't see any LUNs!

Do the zoning on the N5k; the zoneset activation will push the active zone information back to the FI.
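A minimal zoning sketch on the N5k for vsan 500, using the initiator and target WWPNs from the fcns output above (the zone and zoneset names are only placeholders):

! One zone per blade initiator plus the IBM storage target
zone name UCS_BLADE1_IBM vsan 500
  member pwwn 20:00:00:25:b5:00:0a:1f
  member pwwn 50:05:07:68:03:04:90:d6

zone name UCS_BLADE2_IBM vsan 500
  member pwwn 20:00:00:25:b5:00:0a:0f
  member pwwn 50:05:07:68:03:04:90:d6

! Put the zones into a zoneset and activate it in vsan 500
zoneset name ZS_VSAN500 vsan 500
  member UCS_BLADE1_IBM
  member UCS_BLADE2_IBM

zoneset activate name ZS_VSAN500 vsan 500

After activation, "show zoneset active vsan 500" on both the N5k and the FI shell should show the same active zoneset.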

Walter, thank you very much, everything is working.

Performance is 5 times greater than iSCSI on the same box!

Maybe there is something wrong with the IBM iSCSI implementation.

OK, that doesn't matter anymore.

Thank you again.

 
