06-14-2018 11:17 AM - edited 03-01-2019 01:35 PM
Hello All,
I have two 6332-16UP Fabric Interconnects with 3 chassis connected. I am trying to connect a new C240 M4 to my fabric interconnect via a 40 Gb port. The port shows a fault because the Chassis/FEX Discovery global policy is set to 4 Link, and the new C240 I am trying to add only has 1 link. So my question is: if I change the policy to the 1-Link Discovery Policy, will this cause a disruption? Also, my current chassis each have 4 connections; will the change cause them any disruption or loss of service?
Thanks,
Scott
06-14-2018 11:42 AM - edited 06-14-2018 12:13 PM
That policy does not impact rack server discovery, so I would not adjust it while troubleshooting your C-Series integration.
I would make sure all the firmware for the rack server, including the adapter, meets the minimum requirements, that the adapter is in the correct PCIe slot that allows single-wire management, etc.
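If it helps, those checks can be run from the UCS Manager CLI once the server shows up. A minimal sketch, assuming server ID 1 (your ID will differ, and available keywords vary by UCSM release):

```
UCS-A# scope server 1
UCS-A /server # show inventory     # confirms model, CIMC/BIOS versions
UCS-A /server # show adapter       # confirms the VIC model and its slot
```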
Thanks,
Kirk...
06-14-2018 11:52 AM
The fabric interconnect currently gives the following error for the port I have connected the new C240 to:
This is why I ended up at the global policy setting. I had initially connected the C240 to a 10 Gb port on the FI and it was recognized and brought into UCS Manager. Now we are changing to a 40 Gb port and it is not connecting.
Thanks,
Scott
06-14-2018 11:56 AM
Can you confirm the cable optics you are using to connect the C240-M4 to the 40G port?
Are you using VIC 1227? It sounds like you might be using VIC 1385?
06-14-2018 12:27 PM
I am using the 1227 with an adapter to go from 10 Gb on the FI side to the 40 Gb connections.
06-14-2018 12:15 PM - edited 06-14-2018 12:18 PM
Did you decommission the rack server prior to switching the cabling from the 10 Gb to the 40 Gb connection?
You may need to go into the CIMC (F8 option on the console/crash cart) and reset the CIMC to factory defaults, so discovery can be started completely fresh via the 40 Gb connection.
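As an alternative to the F8 menu, the reset can also be done from the CIMC CLI over the serial/KVM console. A sketch only; the exact command path varies by CIMC release, so verify against the CIMC CLI guide for your version:

```
Server# scope cimc
Server /cimc # factory-default
```

After the reset, the server should re-run discovery on the newly cabled link.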
thanks,
Kirk...
06-14-2018 12:28 PM
I did not decommission the server before changing the connections. I will try that and I will reset the CIMC.
Thanks,
Scott
06-15-2018 07:45 AM
I have decommissioned the server and reset the CIMC to factory; however, I am still getting an error on the port in the FI. The error I am getting is:
06-15-2018 07:56 AM
Have you tried to change the speed from 40G to 10G on the port?
06-15-2018 08:18 AM
No, I have not, but I wanted to take advantage of the 40 Gb speed. I'm planning on pushing a few HBAs and NICs through this connection. I will try it and see what happens.
06-15-2018 08:59 AM
Your previous post: "I am using the 1227 with the adapter to go from 10GB on the FI to the 40GB connections."
The 1227 is a 10 Gb adapter, so it would need to negotiate with a 10 Gb link on the 6300. This would require a QSA and the port speed set to 10G on the FI.
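In UCS Manager the speed change is made from the Equipment tab rather than the NX-OS shell (which is read-only on an FI), but for reference, the equivalent change on a standalone Nexus with a QSA inserted would look roughly like this (interface 1/17 is just an example):

```
switch# configure terminal
switch(config)# interface ethernet 1/17
switch(config-if)# speed 10000
switch(config-if)# no shutdown
```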
06-15-2018 09:16 AM
To change the speed, it would need to be configured as a breakout port, correct? I have a QSA connected to the 40 Gb port.
06-15-2018 09:24 AM
Can you send the following outputs from the problem interface:
show run int x/y
show int x/y
show int x/y trans detail
06-15-2018 09:44 AM
06-15-2018 09:45 AM
When you SSH, just "connect nxos" and then run them:
UCS# connect nxos
UCS(nxos)# show int ...