04-08-2019 12:25 PM
I'm using iperf to measure the transfer rate between 2 blades on the same chassis and between 2 blades on different chassis.
* Legend
B1C1 - blade1 chassis1
B2C1 - blade2 chassis1
B3C2 - blade3 chassis2
B4C2 - blade4 chassis2
Ran iperf in server mode on B4C2 ( iperf -s )
Ran the iperf client on B1C1 ( iperf -c B4C2 ); downstream traffic yields 10G ** good
Ran the iperf client on B1C1 ( iperf -c B4C2 -r ); downstream then upstream traffic (run sequentially) yields 10G ** good
Ran the iperf client on B1C1 ( iperf -c B4C2 -d ); simultaneous downstream and upstream traffic yields 10G ** good
Ran the iperf client on B3C2 ( iperf -c B4C2 ); downstream traffic yields 10G ** good
Ran the iperf client on B3C2 ( iperf -c B4C2 -r ); downstream then upstream traffic (run sequentially) yields 10G ** good
Ran the iperf client on B3C2 ( iperf -c B4C2 -d ); simultaneous downstream and upstream traffic yields 5G ** ????
Testing with B2C1 as the server and B3C2 and B4C2 as clients gives the same result..
All blades use the first vNIC as the means of communication..
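For reference, a minimal way to reproduce the problem case (the -t/-i values are just examples I would add for steadier readings; they were not part of the original runs):
on B4C2 (server): iperf -s
on B3C2 (client, same chassis as B4C2): iperf -c B4C2 -t 30 -i 5 -d
The -d (dual test, both directions at once) run is the one that drops to ~5G when client and server sit in the same chassis; the -r (trade-off, one direction after the other) run stays at 10G.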
Has anyone had this issue before?
04-09-2019 03:45 AM - edited 04-09-2019 03:51 AM
Greetings.
What does your Chassis IOM to FI connectivity look like?
How many links, what speed, what IOM model and FI model?
Thanks,
Kirk...
04-09-2019 08:46 AM
04-09-2019 11:18 AM
Do the two ports on each FI mean 1 port per IOM for a total of two links, or two ports per IOM to each FI for a total of 4?
Thanks,
Kirk...
04-10-2019 04:53 AM - edited 04-10-2019 04:56 AM
Where I'm going with this: if you have 1 cable per IOM to FI on these chassis, then you will have 2 blades in the same chassis sharing the same IOM-to-FI link while running the iperf test (assuming your bare-metal OS or guest VM connectivity is likely pinned to the same side). If the blades are in different chassis, this bottleneck would not exist. The IOMs do not perform switching, so blade-to-blade connections go up to the FI switchports and come back down for blades in the same chassis. If you have 2 links per IOM to FI, there are still situations where the blades can share the same IOM link.
Please log into your individual FI A and B nodes, and run the following while your problem iperf test is ongoing:
#connect iom 13 (or whatever chassis # has the SAME two blades you are running the iperf test between that shows the low output)
#show platform software woodside sts
#show platform software woodside rate
Please take a look at this good doc section that covers how the blades are connected to the IOMs: https://www.cisco.com/c/en/us/td/docs/unified_computing/ucs/ts/guide/UCSTroubleshooting/UCSTroubleshooting_chapter_01001.html#reference_BDFF3C42F82743899CCEAB51241191DB
Kirk...
04-10-2019 02:08 PM
Each IOM has 2 physical links to its FI:
IOM 1 --- Line 1 -- Port25 FI_A
IOM 1 --- Line 2 -- Port26 FI_A
IOM 2 --- Line 1 -- Port25 FI_B
IOM 2 --- Line 2 -- Port26 FI_B
Currently, each blade (bare-metal config, no VMs) has 4 vNICs:
vNIC1 -- FI_A
vNIC2 -- FI_B
vNIC3 -- FI_A
vNIC4 -- FI_B
On the server side (OS side), the bonding looks like this (a rough config sketch follows the list):
eth0 - vNIC1 ( bond 0) -- active interface
eth1 - vNIC2 ( bond 0)
eth2 - vNIC3 ( bond 1) -- active interface
eth3 - vNIC4 ( bond 1)
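Assuming these are standard Linux active-backup bonds (which matches the "active interface" behaviour described above), a rough sketch of the kind of configuration in use -- RHEL-style ifcfg files assumed, and the file names and option values here are illustrative, not copied from our blades:
/etc/sysconfig/network-scripts/ifcfg-bond0:
DEVICE=bond0
TYPE=Bond
BONDING_MASTER=yes
BONDING_OPTS="mode=active-backup miimon=100 primary=eth0"
BOOTPROTO=none
ONBOOT=yes
/etc/sysconfig/network-scripts/ifcfg-eth0 (eth1 is the same apart from DEVICE=eth1):
DEVICE=eth0
MASTER=bond0
SLAVE=yes
BOOTPROTO=none
ONBOOT=yes
bond1 (eth2/eth3) follows the same pattern with primary=eth2.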
When blades in the same chassis communicate with each other, they both use eth0 -- vNIC1 (which goes to FI_A)..
A test was done wherein we disabled eth0 (vNIC1) on blade 1 and eth1 on blade 2,
forcing blade 1 to use FI_B and blade 2 to use FI_A.. and we got the result we wanted.. 10G..
Now why is that? Is this how Cisco built the underlying networking in UCS?
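In case it helps anyone reproduce this: the active slave can also be checked and switched from the OS without shutting an interface down. These are standard Linux bonding commands, nothing UCS-specific, and the bond/interface names are just the ones from our layout above:
cat /proc/net/bonding/bond0 (look for the "Currently Active Slave" line)
echo eth1 > /sys/class/net/bond0/bonding/active_slave (fails bond0 over to eth1, i.e. the FI_B vNIC; with a primary set it may fail back automatically depending on primary_reselect)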