Interoperability issues between Nexus 5k and HP StorageWorks (8/20q)

MohaLeen1
Level 1

Hello community,

I am trying to get a VM host and a Windows server to connect to their storage across a Cisco Nexus and an HP (QLogic) fabric switch. The VM host is currently unable to see its datastores, possibly due to an interoperability issue between the Cisco and HP (QLogic) switches.

I configured and tested the connectivity using only the Cisco Nexus, and this worked. I then tested it using only the HP fabric switch (HP 8/20q), and this also worked.

However, when using the HP and the Cisco Nexus together, as shown in the attached diagram, things stop working.

The connection uses native Fibre Channel. On the Cisco side I performed the following steps (a sketch of the equivalent NX-OS commands follows the list):

  • Configured the Nexus with domain ID 10 and the HP with domain ID 20.
  • Connected the two fabric switches on fc1/48 (Cisco) and port 0 (HP) and confirmed that the ISL came up (E_port, 8G); I confirmed connectivity with fcping both ways.
  • Connected the SAN to the Nexus and the servers to the HP.
  • Configured VSAN 10.
  • Added interfaces fc1/41 to 48 to VSAN 10.
  • Created two zones (ESXi and Windows).
  • Added the PWWNs for the ESXi server and the MSA2040 to the ESXi zone.
  • Added the PWWNs for the Windows 2k8 server and the MSA2040 to the Windows zone.
  • Created a zoneset (Fabric-A) and added both of the above zones to it.
  • Activated the Fabric-A zoneset.
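
For reference, those steps map onto NX-OS commands along these lines. This is a sketch only; the PWWNs are placeholders, not the real values from this fabric:

vsan database
  vsan 10
  vsan 10 interface fc1/41
  ! ...repeated for fc1/42 through fc1/48
zone name ESXI vsan 10
  ! placeholder PWWNs: ESXi HBA, then MSA2040 port
  member pwwn 10:00:00:00:00:00:0a:01
  member pwwn 20:00:00:00:00:00:0b:01
zone name Windows vsan 10
  ! placeholder PWWNs: Windows 2k8 HBA, then MSA2040 port
  member pwwn 10:00:00:00:00:00:0a:02
  member pwwn 20:00:00:00:00:00:0b:01
zoneset name Fabric-A vsan 10
  member ESXI
  member Windows
zoneset activate name Fabric-A vsan 10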

The result is that the zones and zoneset were synchronised to the HP switch. I confirmed that I was able to see the server and SAN WWNs in the correct zones on the HP.

From the 8/20q switch I am able to fcping the SAN, the Nexus and the servers; however, the Nexus is only able to fcping the SAN and the HP, and it returns “no response from destination” when pinging the servers.
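
For reference, the checks from the Nexus were of this form (the PWWN and FCID values here are placeholders):

fcping pwwn 10:00:00:00:00:00:0a:01 vsan 10
fcping fcid 0x0a0100 vsan 10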

I have added the FCIDs for all the units to the same zones to see if it makes any difference, to no avail; the result seems to be the same. I have gone through various Nexus/MDS/HP/QLogic user guides and forums; unfortunately, I have not come across any that covers this specific topology.

The HP user guide is here: http://h20565.www2.hp.com/hpsc/doc/public/display?docId=emr_na-c02256394

I’m attaching the Nexus config and a partial view of “show interface brief” showing the Fibre Channel port status:

-------------------------------------------------------------------------------
Interface  Vsan   Admin  Admin   Status          SFP    Oper  Oper   Port
                  Mode   Trunk                          Mode  Speed  Channel
                         Mode                                 (Gbps)
-------------------------------------------------------------------------------
fc1/47     10     auto   on      up              swl    F     8      --
fc1/48     10     auto   on      up              swl    E     8      --

Any help and advice would be greatly appreciated. Thanks in advance.

Accepted Solution

Walter Dey
VIP Alumni

It seems that you are not using any interoperability mode on the Nexus. This cannot work.

4 Replies

Thanks for the pointer, Walter.

I can see there are three interop modes on the MDS (I suspect the Nexus uses the same); I will try each one, see which works, and feed back the results.

I expect the HP will also need some form of interoperability configured? Unfortunately, HP (QLogic) seems to have little to no documentation on this. Thanks again.
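
For reference, on MDS/NX-OS the interop mode is configured per VSAN. A minimal sketch, assuming VSAN 10 from this setup (the VSAN may need to be suspended for the change to apply, which is disruptive):

configure terminal
 vsan database
  vsan 10 suspend
  vsan 10 interop 1
  no vsan 10 suspend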

Hi,

Consider placing the QLogic switch in passthru (NPV) mode and enabling NPIV on the Cisco Nexus.
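
On the Nexus side that is a one-line feature enable; a sketch (the QLogic passthru side would be configured from its own management interface, per the HP documentation):

configure terminal
 feature npiv
show feature | include npiv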

 

Regards,

David

Hi all, after much reading, Walter Dey's hint put me on the right track.

By default the Nexus 5k is in interop mode 1. However, one of the requirements for this to be interoperable with other vendors is that the fcdomain IDs across the entire fabric need to be between 97 and 127, as stated on the Cisco website:

http://www.cisco.com/en/US/docs/storage/san_switches/mds9000/interoperability/guide/ICG_test.html
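
For anyone hitting the same issue, moving the Nexus domain ID into the 97-127 range looked roughly like this (the domain value is an example, each switch in the fabric needs its own unique in-range ID, the HP side is set from its own management interface, and the restart is disruptive):

configure terminal
 fcdomain domain 100 static vsan 10
 fcdomain restart disruptive vsan 10
show fcdomain domain-list vsan 10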

Another issue that had me and my colleague scratching our heads was a high level of CRC errors on the ISL interfaces. This was caused by an ARBFF fill-pattern mismatch between the Nexus and the HP. It was resolved by ensuring that the ARBFF setting on the HP was set to false and that the command “switchport fill-pattern ARBFF speed 8000” was configured on the ISL interface linking the two switches. (Note that Cisco's default fill pattern for these ports is IDLE; until this is changed, the link will not stabilise.)
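
The Nexus-side change, for reference (fc1/48 is the ISL port from this setup; afterwards, check with “show interface fc1/48” that the CRC counters stop incrementing):

configure terminal
 interface fc1/48
  switchport fill-pattern ARBFF speed 8000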

Thanks for all your help guys.
