02-22-2024 09:32 AM
Hi
Newbie here, so please be nice.
I'm using a Cisco Nexus 3132 switch. I have two 40 Gb/s connections but am only getting about 1 Gb/s transfer rates.
According to the Linux networking tools, the link shows as a 40000 Mb/s connection.
The Cisco 3132 also shows the two ports at 40 Gb/s.
But when benchmarking with iperf or transferring large files (100 GB in size), I only see about a 1 Gb/s transfer rate.
Any suggestions?
02-22-2024 09:52 AM
Is the end device connected with a 40 Gb/s connection, or is it just the switch interface that is 40 Gb/s?
Does the Nexus switch have jumbo frames enabled (9K MTU)?
Can you post the show run interface eth x/x and show interface eth x/x output?
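If jumbo frames turn out to be the issue: on the Nexus 3000 series, jumbo MTU for L2 ports is typically applied through a network-qos policy rather than per interface. A minimal sketch (verify the syntax against your NX-OS release; the interface in the verification step is just an example):
! Define a network-qos policy with a 9216-byte MTU and apply it system-wide
policy-map type network-qos jumbo
  class type network-qos class-default
    mtu 9216
system qos
  service-policy type network-qos jumbo
! Verify the MTU actually in effect on a port
show queuing interface ethernet 1/9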
02-22-2024 10:14 AM
Ethernet1/9 is up
admin state is up, Dedicated Interface
Hardware: 40000 Ethernet, address: c4b2.3980.57a8 (bia c4b2.3980.57a8)
MTU 1500 bytes, BW 40000000 Kbit , DLY 10 usec
reliability 255/255, txload 1/255, rxload 1/255
Encapsulation ARPA, medium is broadcast
Port mode is access
full-duplex, 40 Gb/s, media type is 40G
Beacon is turned off
Auto-Negotiation is turned on FEC mode is Auto
Input flow-control is off, output flow-control is off
Auto-mdix is turned off
Rate mode is dedicated
Switchport monitor is off
EtherType is 0x8100
EEE (efficient-ethernet) : n/a
admin fec state is auto, oper fec state is off
Last link flapped 10:05:25
Last clearing of "show interface" counters never
1 interface resets
Load-Interval #1: 30 seconds
30 seconds input rate 246496 bits/sec, 88 packets/sec
30 seconds output rate 118352 bits/sec, 83 packets/sec
input rate 246.50 Kbps, 88 pps; output rate 118.35 Kbps, 83 pps
Load-Interval #2: 5 minute (300 seconds)
300 seconds input rate 241944 bits/sec, 49 packets/sec
300 seconds output rate 157968 bits/sec, 43 packets/sec
input rate 241.94 Kbps, 49 pps; output rate 157.97 Kbps, 43 pps
RX
5331757 unicast packets 4216 multicast packets 208 broadcast packets
5336181 input packets 1206872288 bytes
0 jumbo packets 0 storm suppression packets
0 runts 0 giants 0 CRC 0 no buffer
0 input error 0 short frame 0 overrun 0 underrun 0 ignored
0 watchdog 0 bad etype drop 0 bad proto drop 0 if down drop
0 input with dribble 0 input discard
0 Rx pause
TX
6360244 unicast packets 92927 multicast packets 91358 broadcast packets
6544529 output packets 5377000058 bytes
0 jumbo packets
0 output error 0 collision 0 deferred 0 late collision
0 lost carrier 0 no carrier 0 babble 0 output discard
0 Tx pause
Ethernet1/11 is up
admin state is up, Dedicated Interface
Hardware: 40000 Ethernet, address: c4b2.3980.57b0 (bia c4b2.3980.57b0)
MTU 1500 bytes, BW 40000000 Kbit , DLY 10 usec
reliability 255/255, txload 1/255, rxload 1/255
Encapsulation ARPA, medium is broadcast
Port mode is access
full-duplex, 40 Gb/s, media type is 40G
Beacon is turned off
Auto-Negotiation is turned on FEC mode is Auto
Input flow-control is off, output flow-control is off
Auto-mdix is turned off
Rate mode is dedicated
Switchport monitor is off
EtherType is 0x8100
EEE (efficient-ethernet) : n/a
admin fec state is auto, oper fec state is off
Last link flapped 09:50:45
Last clearing of "show interface" counters never
3 interface resets
Load-Interval #1: 30 seconds
30 seconds input rate 600 bits/sec, 0 packets/sec
30 seconds output rate 7872 bits/sec, 4 packets/sec
input rate 600 bps, 0 pps; output rate 7.87 Kbps, 4 pps
Load-Interval #2: 5 minute (300 seconds)
300 seconds input rate 608 bits/sec, 0 packets/sec
300 seconds output rate 7320 bits/sec, 3 packets/sec
input rate 608 bps, 0 pps; output rate 7.32 Kbps, 3 pps
RX
3024363 unicast packets 3138 multicast packets 142 broadcast packets
3027643 input packets 4263815345 bytes
0 jumbo packets 0 storm suppression packets
0 runts 0 giants 0 CRC 0 no buffer
0 input error 0 short frame 0 overrun 0 underrun 0 ignored
0 watchdog 0 bad etype drop 0 bad proto drop 0 if down drop
0 input with dribble 0 input discard
On Linux...
todd@Z840:~$ ethtool ens5d1
Settings for ens5d1:
Supported ports: [ FIBRE ]
Supported link modes: 10000baseKX4/Full
40000baseCR4/Full
40000baseSR4/Full
56000baseCR4/Full
56000baseSR4/Full
1000baseX/Full
10000baseCR/Full
10000baseSR/Full
Supported pause frame use: Symmetric Receive-only
Supports auto-negotiation: Yes
Supported FEC modes: Not reported
Advertised link modes: 10000baseKX4/Full
40000baseCR4/Full
40000baseSR4/Full
56000baseCR4/Full
56000baseSR4/Full
1000baseX/Full
10000baseCR/Full
10000baseSR/Full
Advertised pause frame use: Symmetric
Advertised auto-negotiation: Yes
Advertised FEC modes: Not reported
Speed: 40000Mb/s
Duplex: Full
Auto-negotiation: off
Port: FIBRE
PHYAD: 0
Transceiver: internal
netlink error: Operation not permitted
Current message level: 0x00000014 (20)
link ifdown
Link detected: yes
todd@Z840:~$
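(Side note: the "netlink error: Operation not permitted" line usually just means ethtool was run without root privileges, so some fields are hidden. Rerunning with sudo, as sketched below, should expose them, and -i reports the driver/firmware versions, which matter for ConnectX-3 cards:)
# Full NIC state, run as root
sudo ethtool ens5d1
# Driver name plus driver and firmware versions
sudo ethtool -i ens5d1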
02-22-2024 02:12 PM
I did some tests with a 10G link using iperf3
and was able to get a cumulative 9.8 Gb/s.
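For reference, the test was along these lines (the server address is a placeholder):
# On the receiving host
iperf3 -s
# On the sending host: 4 parallel TCP streams for 30 seconds
iperf3 -c <server-ip> -P 4 -t 30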
02-22-2024 10:30 AM
For TCP, is the receiving host's RWIN (receive window) sized for the BDP (bandwidth-delay product)?
For actual data files, are both hosts' I/O subsystems able to sustain 40 Gbps?
Are both hosts in the same L2 domain? If not, is PMTUD enabled?
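As a rough worked example (the RTT here is an assumption; substitute your measured value): BDP = bandwidth × RTT, so a 40 Gb/s path with a 0.5 ms RTT needs about 40e9 / 8 × 0.0005 ≈ 2.5 MB of receive window. On Linux you can compare that against the kernel's TCP buffer caps and test the path MTU:
# Maximum socket buffer sizes in bytes (the third tcp_rmem/tcp_wmem value is the cap)
sysctl net.core.rmem_max net.core.wmem_max
sysctl net.ipv4.tcp_rmem net.ipv4.tcp_wmem
# PMTUD check: a don't-fragment ping sized for a 1500-byte MTU (1472 + 28 header bytes)
ping -M do -s 1472 <remote-ip>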
02-23-2024 01:48 AM
"TCP window size: 85.0 KByte (default)"
Don't know your BDP, so unable to say if that's sufficient.
A later reply notes you maxed out the send and receive windows, so RWIN is 2 GB?
Most likely, I would think, the issue isn't with the 3132, but finding the problem can itself be a problem. There are many potential bottlenecks; for example, are both hosts providing the PCIe lanes actually needed to support 40 Gbps?
Unfortunately, you're, I believe, out on the fringe. Many decades ago, I had a remote host with horrible performance which I suspected was due to non-optimal NIC settings. I got a new NIC driver, dropped the day before, installed it, and the performance issues disappeared. Nothing in the driver's release notes described any change that would appear to account for the improvement. (Although rare, I have seen similar cases of new software releases making huge improvements without documentation of why.)
The foregoing is an indirect hint to try different hardware drivers, if available. Usually newer is better, but sometimes older is better.
You also may need to consider trying a different brand of 40 Gbps NIC, and other components of the end-to-end environment. I.e., everything is a possible cause if you've bumped into a software and/or hardware defect.
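For the PCIe-lane question specifically, a couple of quick Linux checks (a sketch; the bus ID is a placeholder, and a ConnectX-3 needs roughly a PCIe 3.0 x8 slot to feed 40 Gbps):
# Find the NIC's PCI address
lspci | grep -i mellanox
# Check the negotiated PCIe link: LnkSta should show e.g. "Speed 8GT/s, Width x8"
sudo lspci -vv -s <bus-id> | grep -i lnksta
# Driver and firmware versions currently loaded
ethtool -i ens5d1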
02-24-2024 04:58 AM
For Windows, it's a bit out of date, but you might find this information helpful: https://www.speedguide.net/articles/windows-8-10-2012-server-tcpip-tweaks-5077
02-22-2024 08:15 PM
Maxing out the receive and send window sizes had no effect. No matter what settings I use, I'm stuck at 1.07 Gb/s.
Interestingly enough, when I first got the switch and connected the fibre optics as a second data link, I was able to copy files to/from either machine at rates of about 7 Gb/s. I don't recall the settings I used at the time, but the IP addresses were 10.0.0.2 and .3, the internet gateway was connected via the onboard 10/100/1000 Ethernet port, and the drives were mounted as NFS shares.
I'll try iperf v3, but I don't think I'll have any better luck. Currently copying a 100 GB file from A to B is still taking too long.
02-23-2024 11:43 AM
iperf3 gave me a 3 Gb/s score, still a far cry from the 40 Gb/s NIC rating. I suppose I can try booting the Win 10 OS and see if the situation changes any. From my point of view there are a few areas to validate:
OS: Linux / Win 10 network settings
NIC: Mellanox ConnectX-3 card. Defective? And/or drivers (Win/Linux)
40G QSFP+ AOC cable: Legrand QSFP40G-AOC15M-LEG (15 m)
40G QSFP+ AOC cable: 10Gtek AOC-Q1Q1-005 (OM3, 5 m)
All of the hardware was purchased used off eBay. Not sure how to test the individual components.
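As a starting point, checking what the driver and the cables report may help isolate components. A sketch, assuming the mlx4 driver that ConnectX-3 cards use on Linux (and note that not every AOC exposes module diagnostics):
# Kernel messages from the Mellanox driver: link speed, firmware warnings, errors
dmesg | grep -i mlx4
# Transceiver/AOC module data and diagnostics, if the cable exposes them
sudo ethtool -m ens5d1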
02-24-2024 11:51 AM
iperf3 might have shed light on the issue. I noticed a column labeled Retr. Not knowing what it meant, I looked it up: it's a retransmission counter, and I'm seeing a large number of these. I haven't found any posts yet on what causes this or how to fix it.
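For what it's worth, Retr in iperf3 counts TCP retransmissions on the sender, which normally means packets are being lost or corrupted somewhere on the path. A couple of ways to watch retransmissions outside iperf3 (sketch):
# System-wide TCP retransmission counters
netstat -s | grep -i retrans
# Per-connection detail (retrans, cwnd, rtt) while a transfer is running
ss -ti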
02-24-2024 02:42 PM
@Toddb wrote:
iperf3 might have shed light on the issue. I noticed a column labeled Retr. Not knowing what it meant, I looked it up: it's a retransmission counter, and I'm seeing a large number of these. I haven't found any posts yet on what causes this or how to fix it.
Lost packets? If so, too many will tank performance.
02-25-2024 09:46 AM
On the Z840 machine I'm running iperf3 -s.
On the Z640 I'm running iperf3 -c 192.168.12.2 -P4 -t 30.
A summary reports this:
[ ID] Interval Transfer Bitrate Retr
[ 5] 0.00-30.00 sec 669 MBytes 187 Mbits/sec 295 sender
[ 5] 0.00-30.08 sec 668 MBytes 186 Mbits/sec receiver
[ 7] 0.00-30.00 sec 700 MBytes 196 Mbits/sec 421 sender
[ 7] 0.00-30.08 sec 699 MBytes 195 Mbits/sec receiver
[ 9] 0.00-30.00 sec 897 MBytes 251 Mbits/sec 570 sender
[ 9] 0.00-30.08 sec 897 MBytes 250 Mbits/sec receiver
[ 11] 0.00-30.00 sec 1010 MBytes 283 Mbits/sec 193 sender
[ 11] 0.00-30.08 sec 1010 MBytes 282 Mbits/sec receiver
[SUM] 0.00-30.00 sec 3.20 GBytes 916 Mbits/sec 1479 sender
[SUM] 0.00-30.08 sec 3.20 GBytes 913 Mbits/sec receiver
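Given those Retr counts, two follow-up runs might narrow it down (the rate and window size below are arbitrary examples): a UDP test to measure raw loss independent of TCP, and a single TCP stream with an explicitly large socket buffer:
# UDP at a fixed offered rate; the final report shows lost/total datagrams
iperf3 -c 192.168.12.2 -u -b 5G -t 30
# Single TCP stream with an 8 MB socket buffer
iperf3 -c 192.168.12.2 -w 8M -t 30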
02-25-2024 09:50 AM
Any ideas on how to test each connection? Does the Cisco 3132 have any built-in diagnostics?
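For what it's worth, NX-OS does expose per-port error counters and transceiver diagnostics that may help (a sketch, using one of the interfaces from the earlier output):
show interface ethernet 1/9 counters errors
show interface ethernet 1/9 transceiver details
show queuing interface ethernet 1/9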