07-20-2013 11:47 AM - edited 03-01-2019 11:09 AM
Dear Team,
Kindly share how to configure connectivity between a Fabric Interconnect (FI) and an MDS switch. What are the steps to follow in establishing a successful connection between the FI and the MDS?
We have an IBM tape library and are trying to connect it to the FI via the MDS. We want this tape library to be detected on a guest VM running Windows 2008 on ESXi (the service profile already has 2 vHBAs added). We are using Fibre Channel connectivity.
Also, please advise which mode we should keep the FI in (FC switching, end host, etc.).
Thanks and Regards
Jose
07-21-2013 11:48 AM
Hi Jose,
Please use the following link to help you configure UCS with MDS switches:
Designing and Deploying a Cisco Unified Computing System SAN Using Cisco MDS 9000 Family Switches
http://www.cisco.com/en/US/prod/collateral/ps4159/ps6409/ps5990/white_paper_c11_586100.html
07-21-2013 09:12 PM
Hello Jose,
The link below may help you connect your FI to the MDS:
http://www.virtualdatacenter.me/2012/06/fibre-channel-trunking-and-port.html
07-21-2013 10:34 PM
Hi Jose,
Configure the FI in its default mode (End Host Mode / NPV) and enable NPIV on the MDS. With this configuration the MDS holds all the zoning information while the FI acts as a host with many FCIDs logging in to the MDS; in other words, the zoning configuration lives only on the MDS. Once connected, the MDS sees the FI as a host.
The MDS port connected to the FI must be configured in F port mode, while the FI port will come up as an NP port by default. If you want to use a SAN port channel between the MDS and the FI, you must enable the F port channel trunking feature on the MDS. Enable trunking on the FI if you want to carry more than one VSAN on the link between the FI and the MDS (the MDS side becomes a TF port and the FI side a TNP port).
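As a rough sketch of the MDS side (the interface range fc1/1-2, port channel number 102, and VSAN 10 are just examples here; adjust them to your topology):
MDS(config)# feature npiv
MDS(config)# feature fport-channel-trunk
MDS(config)# interface fc1/1-2
MDS(config-if)# channel-group 102 force
MDS(config-if)# no shutdown
MDS(config)# interface san-port-channel 102
MDS(config-if)# switchport mode F
MDS(config-if)# switchport trunk mode on
MDS(config)# vsan database
MDS(config-vsan-db)# vsan 10 interface san-port-channel 102
On the UCS side the matching FC uplink ports and port channel are configured from UCS Manager (SAN tab), and the FI side of the links comes up as NP/TNP automatically while the FI is in End Host Mode.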
Thanks
07-24-2013 10:33 PM
Thank you all.. was able to make a successful connection.
What we did:
Physical connection made from the TL to the MDS
Physical connection made from the MDS to the FI
Put the fabric in FC switching mode
Configured an FC port on the FI and made it an uplink (on making it an uplink the light turned green on both the FI and the MDS; before, it was red)
Enabled NPIV on the MDS
If I run the command "show flogi database", I can see the WWPN and WWNN of the TL (drive) and the WWNN and WWPN set on the vHBA via the service profile.
Now the query is: how do we make the TL (drive) available to the Windows host running on ESXi, with the UCS service profile having the vHBA whose WWPN and WWNN are already discovered by the MDS?
Thanks and Regards
Jose
07-24-2013 11:46 PM
Hi Jose,
Have you configured a zone with the ESXi host and the drive as members? After configuring the zoning, ESXi will see its storage (you can present the datastore on the ESXi host).
If you want the Windows guest running on the ESXi host to have its own storage, you must configure a raw device mapping (RDM) for the Windows VM.
Please refer to the document below :
http://www.vmware.com/pdf/esx25_rawdevicemapping.pdf
Thanks,
Gofi
07-26-2013 12:55 AM
Thank you Gofi..
Now we are stuck at a point where, in the ESXi (ESXi 5.1, B200 M3) configuration under storage adapters, we see the HBA card with its WWNN, but in the details under paths it shows "dead" (we updated the drivers on ESXi, but still the same).
We did zoning using MDS Device Manager, and in the MDS "show flogi database" shows the WWNNs of the tape library, the vHBA, and the Fabric Interconnect port (which we made an uplink).
Zoning was done with the TL and the vHBA, but ESXi still fails to identify the path.
But if we install Windows on this B200 M3 and update the drivers, Windows will detect the TL through backup software.
ESXi on the B200 M3, however, does not like the path: it shows "dead" and no device is shown.
All are in the default VSAN 1.
Is some additional config needed, as it is TL -> MDS -> FI?
Thanks and Regards
Jose
07-26-2013 01:36 AM
Hi Jose,
The common steps to configure FC on the FI and MDS are as follows:
1. Configure the FI in NPV mode, and ensure that all FC uplinks and the vHBAs of the blades have been assigned to the correct VSAN.
Verify that the vHBAs have performed FLOGI into the FI:
show npv flogi-table
show npv status
Example
FI-A(nxos)# sh npv flogi-table
--------------------------------------------------------------------------------
SERVER EXTERNAL
INTERFACE VSAN FCID PORT NAME NODE NAME INTERFACE
--------------------------------------------------------------------------------
vfc703 10 0x330001 20:00:00:cc:1e:dc:01:0a 20:00:00:25:b5:00:00:03 San-po102
vfc711 10 0x330002 20:00:00:cc:1e:dc:02:0a 20:00:00:25:b5:00:00:02 San-po102
Total number of flogi = 2.
FI-A(nxos)# sh npv status
npiv is enabled
disruptive load balancing is disabled
External Interfaces:
====================
Interface: san-port-channel 102, State: Up
VSAN: 10, State: Up, FCID: 0x330000
Number of External Interfaces: 1
Server Interfaces:
==================
Interface: vfc703, VSAN: 10, State: Up
Interface: vfc711, VSAN: 10, State: Up
Number of Server Interfaces: 2
2. Enable NPIV mode on the MDS, and configure the link to the FI as an F port. Assign this port to the correct VSAN.
The FI acts as a node proxy, so verify that all vHBAs have been detected on the upstream NPIV switch:
SW2(config)# sh npiv status
NPIV is enabled
SW2(config)# sh flogi database
--------------------------------------------------------------------------------
INTERFACE VSAN FCID PORT NAME NODE NAME
--------------------------------------------------------------------------------
San-po102 10 0x330000 24:66:00:2a:6a:05:e9:00 20:0a:00:2a:6a:05:e9:01
San-po102 10 0x330001 20:00:00:cc:1e:dc:01:0a 20:00:00:25:b5:00:00:03
[BLADE1-SAN-A]
San-po102 10 0x330002 20:00:00:cc:1e:dc:02:0a 20:00:00:25:b5:00:00:02
[BLADE2-SAN-A]
3. Configure zoning on the MDS (to determine whether the issue is with zoning or not, you can configure the default zone to permit).
Example : zone default-zone permit vsan 1
Verify the zoning on Upstream NPIV switch
SW2(config)# sh zoneset active vsan 10
zoneset name ZONESET-VSAN10 vsan 10
zone name INE-VSAN10 vsan 10
* fcid 0x330001 [device-alias BLADE1-SAN-A]
* fcid 0x330002 [device-alias BLADE2-SAN-A]
* fcid 0x2a0000 [device-alias FC-SAN10]
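A zoneset like the one above can be built with something like the following (the zone, zoneset, and device-alias names are taken from the output above; the pWWN is a placeholder you would replace with your real one, and FC-SAN10 is assumed to be the storage/tape port alias):
SW2(config)# device-alias database
SW2(config-device-alias-db)# device-alias name BLADE1-SAN-A pwwn 20:00:00:cc:1e:dc:01:0a
SW2(config-device-alias-db)# exit
SW2(config)# zone name INE-VSAN10 vsan 10
SW2(config-zone)# member device-alias BLADE1-SAN-A
SW2(config-zone)# member device-alias FC-SAN10
SW2(config)# zoneset name ZONESET-VSAN10 vsan 10
SW2(config-zoneset)# member INE-VSAN10
SW2(config)# zoneset activate name ZONESET-VSAN10 vsan 10
(If the fabric is running enhanced device-alias or enhanced zoning mode, a "device-alias commit" / "zone commit" step is also needed.)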
4. Ensure that the vHBAs of the blades have been discovered on the storage device, and configure LUN masking on the storage.
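After any zoning change it can also be worth forcing a rescan from the ESXi shell so the host picks up the new paths (standard ESXi 5.x esxcli commands, run via SSH or the local shell):
~ # esxcli storage core adapter rescan --all
~ # esxcli storage core adapter list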
I really don't have enough knowledge of VMware, hahaha..... so I can't help with troubleshooting the VM. Really sorry!
Thanks,
Gofi
07-31-2013 09:05 PM
Hello All,
Finally sorted out the issue. The path was showing as dead because """the tape drive had no tape in it"""..
Searched a lot and found in the blog below the suggestion to insert a tape in the drive and check:
http://communities.vmware.com/thread/179768?start=0&tstart=0
"""
Hi, sebam
I guess your question was closed many months ago. I had the same issue with an IBM tape library (TS3200, connected by Fibre Channel) when ESXi 3.5 U2 was the most recent release. I think my problem did not repeat again up to U5. But by now VI3 has been superseded by vSphere almost everywhere in the world. So has anything changed? Nothing!
Look at many posts like this one: http://communities.vmware.com/message/1597444
But there is one more additional important step ("feature") we should take to escape "dead" SCSI paths - try to load a tape in your tape drive (for example, using your library's web management interface), and it is magic! The SCSI paths will change their state from "dead" to normal! One remark - until the next ESXi reboot...
"""
The TL is from IBM.
We got this fixed last Friday, but wanted to know: is this how it has to be - path alive with a tape loaded, dead without a tape after the next reboot?
Thanks and Regards
Jose
08-01-2013 08:58 AM
Hi Jose,
The several reference links seem quite confusing regarding the affected system versions.
Is this only for ESX 3 and 4, or can this "feature" still be found in ESX 5?
Thanks !
-Eric
08-01-2013 08:56 PM
Hi Eric,
Yes, we are using ESXi 5.1.
The TL is IBM System Storage.
Thanks and Regards
Jose