08-26-2015 12:46 PM - edited 03-01-2019 12:20 PM
We have a Cisco UCS 5108 chassis and two Cisco 6248 Fabric Interconnects. They connect to four Cisco 3172TQ switches. I can share a diagram of our setup if needed.
I was testing network redundancy by disconnecting all network uplink connections on the FIs (one FI at a time). Failover worked perfectly. However, when I started re-enabling the network uplinks, I lost 5 pings. Worse, depending on which FI I was testing, a different server would drop 5 pings when the uplinks were re-enabled.
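For reference, the length of a gap like that can be timed with a simple loop; a rough sketch, assuming a Linux host with the standard iputils ping, and a placeholder target address:

# Times how long a target stays unreachable during failover tests.
# Assumes Linux iputils ping; TARGET is a placeholder address.
import subprocess
import time

TARGET = "10.0.0.10"  # placeholder: a server behind the FIs

outage_start = None
while True:
    ok = subprocess.run(
        ["ping", "-c", "1", "-W", "1", TARGET],
        stdout=subprocess.DEVNULL,
        stderr=subprocess.DEVNULL,
    ).returncode == 0
    if not ok and outage_start is None:
        outage_start = time.monotonic()   # first missed ping
    elif ok and outage_start is not None:
        print(f"reachable again after ~{time.monotonic() - outage_start:.0f}s")
        outage_start = None
    time.sleep(1)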
I don't think our setup really is that complex but I can't figure it out.
Thanks for any help.
Chris
08-26-2015 05:27 PM
It could be a combination of a few things:
1. Is spanning-tree portfast trunk configured on the links going to the FIs? (See the example config after the pinning output below.)
2. If the links are not in a port-channel, the vNICs are probably being redistributed when the new links come up. You can check with the following commands from the CLI:
'connect nxos'
'show pinning border-interfaces'
--------------------+---------+----------------------------------------
Border Interface     Status    SIFs
--------------------+---------+----------------------------------------
Eth1/20              Active    veth961 veth965 veth973 veth977 veth981
                               veth985 veth989 veth993 veth997
                               veth1001 veth1005 veth1009 veth1013
                               veth1017 veth1021 veth1025 veth1029
                               veth1033 veth1037 veth1041 veth1045
                               veth1049 veth1053 veth1057 veth1061
                               veth1065 veth1069 veth1073 veth1081
                               veth1085
You'll probably see everything pinned to Eth1/20, for example, and then a few minutes after Eth1/21 is enabled the vEth interfaces will start to be split between the two uplinks when you re-run that show command:
'connect nxos'
'show pinning border-interfaces'
--------------------+---------+----------------------------------------
Border Interface     Status    SIFs
--------------------+---------+----------------------------------------
Eth1/20              Active    veth961 veth965 veth973 veth977 veth981
                               veth985 veth989 veth993 veth997
                               veth1001 veth1005 veth1009 veth1013
                               veth1017 veth1021 veth1025 veth1029
Eth1/21              Active    veth1033 veth1037 veth1041 veth1045
                               veth1049 veth1053 veth1057 veth1061
                               veth1065 veth1069 veth1073 veth1081
                               veth1085
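On point 1: if portfast trunk turns out to be missing, the NX-OS equivalent on the 3172 ports facing the FIs would look roughly like this sketch (the interface number is just a placeholder for whichever ports those are):

interface Ethernet1/20
  description uplink to FI (placeholder port)
  switchport mode trunk
  spanning-tree port type edge trunk

Without the edge trunk setting, the port can sit in a spanning-tree blocking/learning state briefly each time the link comes back up, which by itself can account for several lost pings.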
08-26-2015 07:03 PM
Brian,
First of all, Thanks!
I'll get with our Network Admin first thing tomorrow so we can go over this.
However:
1 - I don't know; our network admin will be able to verify this.
2 - There are no port-channels. Each FI has two uplink network connections: one link goes to the Rack 1 3172 Primary, the other to the 3172 Secondary. The other FI connects in the same manner, but to the 3172s in Rack 2. So for now, it's a one-to-one uplink connection to each 3172. (I hope this makes sense; I'm definitely not a network admin.)
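For reference, if port-channels are added later, the upstream 3172 side would look something like the sketch below; the interface and channel numbers are invented placeholders:

feature lacp

interface port-channel10
  switchport mode trunk
  spanning-tree port type edge trunk

interface Ethernet1/1
  switchport mode trunk
  channel-group 10 mode active

interface Ethernet1/2
  switchport mode trunk
  channel-group 10 mode active

The FI side of an uplink port-channel is created in UCS Manager (LAN > LAN Cloud > Fabric A/B > Port Channels) rather than on the FI CLI, and because each FI's two uplinks land on different 3172s, the switch pair would also need to be configured as vPC peers for a single port-channel to span both switches.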
I'll post results of our discussion/tests.
Again, Thanks!
Chris
08-26-2015 10:43 PM
Powering an FI back up involves many steps:
- forming the cluster between the FIs
- all vNICs on that fabric are down and get re-enabled
- OS: NIC teaming might switch the active path back to this fabric
- GARP (gratuitous ARP) processing
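That last step is often where the lost pings come from: until the upstream switches process the gratuitous ARPs and relearn each vNIC MAC on its new port, traffic keeps being forwarded toward the old path. Purely as a generic illustration of what a gratuitous ARP is (not UCS code; the interface, MAC, and IP are placeholders), a minimal scapy sketch:

# Gratuitous ARP: an unsolicited "IP is at MAC" announcement that makes
# switches relearn the MAC's location. Requires scapy and root privileges.
from scapy.all import ARP, Ether, sendp

iface = "eth0"                 # placeholder interface
mac = "00:25:b5:00:00:0a"      # placeholder vNIC MAC
ip = "10.0.0.10"               # placeholder vNIC IP

# op=2 is an ARP reply; sender and target IP are both the host's own
# address, and the frame is broadcast to the whole segment.
garp = Ether(dst="ff:ff:ff:ff:ff:ff", src=mac) / ARP(
    op=2, hwsrc=mac, psrc=ip, hwdst="ff:ff:ff:ff:ff:ff", pdst=ip
)
sendp(garp, iface=iface, verbose=False)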
08-27-2015 05:19 AM
Walter,
I'll take a look into your notes today together with the netadmin.
- The FIs are already in a cluster (the default setup for 2 FIs)
- The OSs were set up with one NIC only (ESXi mgmt has 2 NICs but 1 IP, and ESXi takes care of that)
Thank you very much!
Chris