
Fabric Interconnect Port Channels

Hello All, 

We have two Cisco 6296UP fabric interconnects connected to two chassis and uplinked to two Nexus 5000 switches. This was initially set up by a vendor before my time, so I am trying to make sense of some of the choices they made.

My first question is about the uplinks to the two Nexus switches. The Nexus switches are set up as vPC peers, yet there is no port channel between the Nexus pair and the fabric interconnects. Two twinax cables go from fabric A to Nexus 1, and another two twinax cables go from fabric B to Nexus 2. Is there a reason for this, and are all links being utilized properly? Logically, to me, fabric A should have a port channel configured with one twinax cable going to Nexus 1 and the other to Nexus 2, and fabric B should have the same configuration.

My other question is regarding port channels to the chassis. I have seen some documentation suggesting that you can run a port channel to the chassis. We currently have 4 twinax cables from each interconnect to each IOM on each chassis. Should these be configured as a port channel, or is that not necessary? All of the blades run VMware ESXi 6; I just want to make sure all of the links are being utilized properly.

Any help is appreciated! 


Kirk J
Cisco Employee

Greetings.

Generally you would expect each FI to have a port channel (one link to Nexus A, one link to Nexus B), accommodated by the vPC on the upstream Nexus pair.  Unless these are unified ports carrying FCoE, it may have just been an oversight during the original setup.
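On the Nexus side, that recommended topology would look roughly like the sketch below (a hedged example, not this environment's actual config; the port-channel/vPC IDs and interface numbers are hypothetical, and the same config would be repeated on the vPC peer with its own member port):

```
! Nexus 1 -- example only; repeat on Nexus 2 with its own member interface
feature lacp
feature vpc

interface port-channel11
  description Uplink to FI-A (vPC across both Nexus peers)
  switchport mode trunk
  vpc 11

interface Ethernet1/1
  description Twinax to FI-A
  switchport mode trunk
  channel-group 11 mode active
```

On the UCS side you would then create a matching LAN uplink port channel per FI in UCS Manager (LAN > LAN Cloud > Fabric A/B > Port Channels), which runs LACP toward the Nexus pair.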

You set the IOM-to-FI port channels in the global chassis/FEX discovery policy by setting the link grouping preference to Port Channel.  The IOM-to-FI links are transparent to the hosts, but you do want grouping enabled, because an individual link outage has less impact when the IOM-to-FI links run as a port channel.
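If you prefer the UCSM CLI over the GUI, the discovery policy change looks roughly like this (a sketch; verify the exact syntax against your UCS Manager version):

```
UCS-A# scope org /
UCS-A /org # scope chassis-disc-policy
UCS-A /org/chassis-disc-policy # set action 4-link
UCS-A /org/chassis-disc-policy # set link-aggregation-pref port-channel
UCS-A /org/chassis-disc-policy # commit-buffer
```

The `set action 4-link` value here assumes your four cables per IOM should all be required for discovery; adjust it to match how many links you actually want mandatory.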

Thanks,

Kirk...

We do not use any Fibre Channel. So what I am hearing is that I should absolutely set up the vPC from the interconnects to the Nexus switches, and that I should absolutely configure the port channels to the chassis?

Is there any way to change the grouping after the chassis has been discovered, or will I need to change the policy and then rediscover the chassis? Or can I simply acknowledge the IOM modules one at a time to prevent downtime?

Thanks for the reply!

You should be able to re-ack the IOMs one at a time after adjusting the grouping/discovery policy, assuming you are running 2.2(4b) or later.

There shouldn't be anything you need to change on the FI side of the port channel/vPC; you are just moving cables, which the vPC configuration on the upstream Nexus switches should accommodate.

Thanks,

Kirk...
