08-01-2018 11:11 PM
Hi,
We are having performance issues between two blades in different chassis, but both are in the same VLAN.
Please let me know whether the communication will be switched within the Fabric Interconnect itself, or whether the traffic needs to be switched through the Nexus 5K/Nexus 7K core. Also, please suggest how we can analyse the performance issue.
Please provide expert input.
08-02-2018 05:06 AM
Greetings.
To determine whether the traffic between your two blades is being locally switched at a single FI, you'll need to evaluate the vNIC and MAC pinning. If this is guest-VM traffic, then check the MAC learning table on the FIs to confirm where the two endpoint MAC addresses are learned.
If they are both in the same VLAN and subnet, and pinned to the same FI, then the traffic does not need to be switched up to the N5Ks. If each endpoint is pinned to a different FI, then the traffic is sent out the FI uplinks and switched at the N5Ks.
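For example (a rough sketch; the MAC address and port numbers below are placeholders for your environment), from the UCS Manager CLI you can drop into the NX-OS shell on each FI to see where each endpoint MAC is learned and which interfaces the server vNICs are pinned to:

    UCS-A# connect nxos a
    UCS-A(nxos)# show mac address-table address 0025.b500.0a01
    UCS-A(nxos)# show pinning server-interfaces
    UCS-A(nxos)# show pinning border-interfaces

If both MACs are learned on the same FI on server-facing (Veth) interfaces, the traffic is switched locally; if one of them only shows up on an uplink port-channel, that traffic is going up to the N5Ks.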
You'll also want to make sure you don't have any CRC-type errors incrementing on your physical links.
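As a quick sketch (the interface ID below is a placeholder), the error counters can be checked from the same NX-OS shell on the FIs, and the same commands apply on the N5Ks:

    UCS-A(nxos)# show interface ethernet 1/17
    UCS-A(nxos)# show interface counters errors

Look for CRC, input errors, and discards that keep incrementing when you re-run the commands during a transfer test.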
There are other things to check as well, such as adapter policies (buffers, ring size, RSS, etc.), vNIC drivers matching the HCL, and jumbo frame configuration.
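As one example of the host-side checks, assuming Linux guests and an interface named eth0 (both are assumptions; adjust for your OS and NIC names), you could verify ring sizes, NIC drop counters, and that jumbo frames actually pass end to end:

    # current vs. maximum RX/TX ring sizes on the vNIC
    ethtool -g eth0
    # per-NIC statistics, including drops/overruns
    ethtool -S eth0
    # jumbo test: 8972-byte payload + 28 bytes of ICMP/IP headers = 9000, DF-bit set
    ping -M do -s 8972 <peer-ip>

If the large ping fails with "message too long" while a normal ping works, the MTU is not consistent along the whole path.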
You may want to open a TAC case.
Thanks,
Kirk...
08-02-2018 11:45 PM
Thanks for the expert input. Yes, I took the help of VCE support.
The traffic was made to switch through Fabric Interconnect A, and we monitored interface-level packet drops, counter errors, and CRCs. No issues were found.
Finally, the server team identified that the antivirus software on the servers was causing the delay. After stopping the AV service, the issue was fixed.
Thanks a ton again for the inputs.