
Catalyst 3020 configuration question in HP c3000 enclosure

kevin.klein
Level 1

OK, here is the scenario: a single c3000 enclosure with four BL460c blades (eventually it will be eight BL460c blades). Each blade contains the QLogic iSCSI dual-port adapter in Mezz 1 and a quad-port NIC in Mezz 2, and all four ICBs have a Cisco Catalyst 3020 installed. I am fully aware that the dual Gig/E NICs onboard the BL460c route to ICB 1, that the QLogic dual-port iSCSI NIC routes to ICB 2, and that the quad-port NIC in Mezz 2 routes to both ICB 3 and ICB 4.

Outside of the c3000 there is a pair of ProCurve 2910al switches being used specifically for MPIO iSCSI traffic, which ties back to a single Dell/EqualLogic PS6000XV SAN with dual controllers (active/passive), each having four Gig/E ports.

Since both iSCSI ports on Mezz 1 route to ICB 2, in order to provide HA and MPIO I need to know whether (and how) it is possible to route to both external switches from any given blade through the single Cisco 3020 switch installed in ICB 2. There are eight external uplink ports, but I cannot find anywhere that states how the downlink ports map to the uplink ports, or whether that mapping is configurable. I know my single point of failure is the 3020 itself, but the goal, barring the 3020 itself failing, is to have four of the eight uplink ports go to one 2910al and the other four go to the second 2910al. From there, each 2910al ties in to the dual controllers on the SAN.

I would like iSCSI NIC 1 on any BL460c in Mezz 1 to be routed to the first 2910al and iSCSI NIC 2 on any BL460c in Mezz 1 to be routed to the second 2910al. This way, if a SAN controller or a 2910al switch fails, I still have an active/open iSCSI path from the c3000 and all BL460c blades to the SAN. Can someone please point me to any supporting documentation that would allow this to happen? Thanks in advance for your help!

4 Replies

IAN WHITMORE
Level 4

You can find how the uplink/downlink ports map here:

http://www.cisco.com/en/US/docs/switches/blades/3020/hardware/quick/guide/3020GSG2.html

Configuration guide here:

http://www.cisco.com/en/US/docs/switches/blades/3120/software/release/12.2_58_se/configuration/guide/3120_scg.html

NB: this is for the latest IOS; you may have a different version.

I didn't fully understand everything you wrote; it might be clearer with a diagram showing how you have it now and what you want to achieve. I don't understand why your 3020 is a single point of failure. If I remember correctly, in the HP system there are three Cisco 3020s per enclosure, and they all interconnect internally. But maybe my memory isn't so good...

HTH,

Ian

Ian, first and foremost, thank you for replying. The HP c3000 contains a total of four Interconnect Bays (ICBs) in the rear for switch modules or pass-through patch panels. Furthermore, there is very specific hard-coded routing of Ethernet and Fibre Channel ports from each blade (based on either the onboard NICs or the mezzanine position) to the ICBs. Specifically, the dual onboard Gig/E NICs route to whatever switch or pass-through is installed in ICB 1, the first mezzanine card's ports (dual-port max) route to whatever is installed in ICB 2, and the second mezzanine card's ports route to ICB 3 only if dual-port, or to both ICB 3 and ICB 4 if quad-port (as is the case in my configuration). Everything above assumes the smaller c3000 enclosure and only half-height blades such as the BL460c (as is the case in my configuration), with a Cisco Catalyst 3020 installed in all four ICBs.

Now, focusing in on the specific question/issue...

I have a QLogic dual-port Gig/E iSCSI adapter installed in Mezzanine 1 on each BL460c blade server. That means the iSCSI/Ethernet traffic from that card (both ports) will route specifically to ICB 2 (the Cisco 3020 switch). Outside of the blade enclosure, the external SAN is an EqualLogic PS6000-series iSCSI SAN with dual controllers (one active and one passive); each controller has four Gig/E ports. Other than the Cisco 3020 switch in ICB 2, we want to eliminate as many potential points of failure as possible. Between the blade enclosure and the SAN we have two HP ProCurve 2910al switches. Between the SAN and the two external HP switches we are configured as follows: 2 of the 4 ports from the active SAN controller go to ProCurve switch 1, while the other 2 go to ProCurve switch 2. The same has been done for the passive controller (2 ports to switch 1 and 2 ports to switch 2). This way, whether a switch or a SAN controller fails, there is still an open, active path to the SAN.

Now back to the Cisco 3020 switch in ICB 2: we know all of the iSCSI ports from the QLogic iSCSI card in Mezz 1 go to ICB 2 for any given blade, and we can even determine how those connections are mapped to the 16 internal downlink ports on the 3020 switch. What we do not know, and cannot find documented anywhere, is how the 16 internal downlink ports map to the 8 available external uplink ports on the 3020. The goal is to connect that single 3020, using all 8 external ports, to both ProCurve 2910al switches (4 + 4) so that, again, if a switch or SAN controller fails, there is an active path between the blade enclosure and the SAN.

Since each QLogic iSCSI card (in Mezz 1) is dual-port, we do not want both of its internal ports to route to the same external port on the 3020, because then there would be no way to split the traffic between the two external ProCurve 2910al switches. I'd like iSCSI port 1 and iSCSI port 2 to be able to map to two different uplink ports on the 3020, if that is possible. We want to route the traffic so that both iSCSI ports on a mezzanine card do not come out of the same physical uplink port. Does that make sense?

Thanks for the explanation; it makes sense. In my experience the 8 external ports don't map to the internal ports: they are just 8 more switch ports. So you could treat them as such, create two EtherChannels, one to each of the HP switches, and pass the VLANs etc. that you need over the EtherChannel trunks. I think, though, that you can only use 4 of the 8 for fibre, or 6 of the 8 if using copper. Anyway, maybe this doc will help a bit:

http://www.cisco.com/en/US/prod/collateral/switches/ps6746/ps6748/ps6765/design_guide_c07-468192.pdf
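
For what it's worth, here is a minimal IOS sketch of that idea on the 3020. The external port numbers (Gi0/17 - 24 here), the channel-group numbers, and the iSCSI VLAN (100) are assumptions for illustration only; check the hardware guide for the actual external port numbering on your unit:

! Hypothetical sketch: port numbers, channel groups, and VLAN 100 are placeholders
! Bundle 1: four external ports to ProCurve 2910al number 1
interface range GigabitEthernet0/17 - 20
 description LACP bundle to ProCurve 2910al 1
 switchport mode trunk
 switchport trunk allowed vlan 100
 channel-group 1 mode active
!
! Bundle 2: four external ports to ProCurve 2910al number 2
interface range GigabitEthernet0/21 - 24
 description LACP bundle to ProCurve 2910al 2
 switchport mode trunk
 switchport trunk allowed vlan 100
 channel-group 2 mode active

Each 2910al would then need a matching LACP trunk on its side (on ProCurve, something like trunk 1-4 trk1 lacp, with the iSCSI VLAN tagged on Trk1).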

We simply configured g0/15 (using a fibre GBIC) on each switch back to our core 6509 as a trunk port and passed the relevant VLANs.
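
As a hypothetical rendering of that (g0/15 is the port Ian mentions; the VLAN list is a placeholder):

! Sketch only: the allowed VLAN numbers are placeholders
interface GigabitEthernet0/15
 description Fibre trunk to core 6509
 switchport mode trunk
 switchport trunk allowed vlan 10,20,100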

HTH,

Ian

Thanks Ian - I'll pass your info along to the team and hopefully that helps!  I'll let you know...
