1 Gb data rates across a 40 Gb connection - WHY?

Toddb
Level 1

Hi

Newbie here, so please be nice.

I'm using a Cisco 3132 switch. I have two 40 Gb connections but I'm only getting 1 Gb/s transfer rates.

According to the Linux networking tools, it shows a 40000 Mb/s connection.

According to the Cisco 3132, it also shows the two ports at 40 Gb/s.

But when benchmarking with iperf or transferring large files (100 GB in size), I only see about a 1 Gb/s transfer rate.

Any suggestions?

19 Replies

balaji.bandi
Hall of Fame

Is the end device connected at 40 Gb, or does only the interface support 40 Gb?

Does the Nexus switch have jumbo frames enabled? 9K?

Can you post "show run interface eth x/x" and "show interface eth x/x" output?
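For reference, a rough sketch of checking and raising the L2 MTU on a Nexus 3000-class switch. This assumes an NX-OS image where L2 jumbo MTU is set through a network-qos policy (the policy name "jumbo" is just an example); verify the exact method against the configuration guide for your release.

show interface ethernet 1/9 | include MTU
configure terminal
policy-map type network-qos jumbo
  class type network-qos class-default
    mtu 9216
system qos
  service-policy type network-qos jumbo
end
copy running-config startup-config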

 

BB


Ethernet1/9 is up
admin state is up, Dedicated Interface
Hardware: 40000 Ethernet, address: c4b2.3980.57a8 (bia c4b2.3980.57a8)
MTU 1500 bytes, BW 40000000 Kbit , DLY 10 usec
reliability 255/255, txload 1/255, rxload 1/255
Encapsulation ARPA, medium is broadcast
Port mode is access
full-duplex, 40 Gb/s, media type is 40G
Beacon is turned off
Auto-Negotiation is turned on FEC mode is Auto
Input flow-control is off, output flow-control is off
Auto-mdix is turned off
Rate mode is dedicated
Switchport monitor is off
EtherType is 0x8100
EEE (efficient-ethernet) : n/a
admin fec state is auto, oper fec state is off
Last link flapped 10:05:25
Last clearing of "show interface" counters never
1 interface resets
Load-Interval #1: 30 seconds
30 seconds input rate 246496 bits/sec, 88 packets/sec
30 seconds output rate 118352 bits/sec, 83 packets/sec
input rate 246.50 Kbps, 88 pps; output rate 118.35 Kbps, 83 pps
Load-Interval #2: 5 minute (300 seconds)
300 seconds input rate 241944 bits/sec, 49 packets/sec
300 seconds output rate 157968 bits/sec, 43 packets/sec
input rate 241.94 Kbps, 49 pps; output rate 157.97 Kbps, 43 pps
RX
5331757 unicast packets 4216 multicast packets 208 broadcast packets
5336181 input packets 1206872288 bytes
0 jumbo packets 0 storm suppression packets
0 runts 0 giants 0 CRC 0 no buffer
0 input error 0 short frame 0 overrun 0 underrun 0 ignored
0 watchdog 0 bad etype drop 0 bad proto drop 0 if down drop
0 input with dribble 0 input discard
0 Rx pause
TX
6360244 unicast packets 92927 multicast packets 91358 broadcast packets
6544529 output packets 5377000058 bytes
0 jumbo packets
0 output error 0 collision 0 deferred 0 late collision
0 lost carrier 0 no carrier 0 babble 0 output discard
0 Tx pause

Ethernet1/11 is up
admin state is up, Dedicated Interface
Hardware: 40000 Ethernet, address: c4b2.3980.57b0 (bia c4b2.3980.57b0)
MTU 1500 bytes, BW 40000000 Kbit , DLY 10 usec
reliability 255/255, txload 1/255, rxload 1/255
Encapsulation ARPA, medium is broadcast
Port mode is access
full-duplex, 40 Gb/s, media type is 40G
Beacon is turned off
Auto-Negotiation is turned on FEC mode is Auto
Input flow-control is off, output flow-control is off
Auto-mdix is turned off
Rate mode is dedicated
Switchport monitor is off
EtherType is 0x8100
EEE (efficient-ethernet) : n/a
admin fec state is auto, oper fec state is off
Last link flapped 09:50:45
Last clearing of "show interface" counters never
3 interface resets
Load-Interval #1: 30 seconds
30 seconds input rate 600 bits/sec, 0 packets/sec
30 seconds output rate 7872 bits/sec, 4 packets/sec
input rate 600 bps, 0 pps; output rate 7.87 Kbps, 4 pps
Load-Interval #2: 5 minute (300 seconds)
300 seconds input rate 608 bits/sec, 0 packets/sec
300 seconds output rate 7320 bits/sec, 3 packets/sec
input rate 608 bps, 0 pps; output rate 7.32 Kbps, 3 pps
RX
3024363 unicast packets 3138 multicast packets 142 broadcast packets
3027643 input packets 4263815345 bytes
0 jumbo packets 0 storm suppression packets
0 runts 0 giants 0 CRC 0 no buffer
0 input error 0 short frame 0 overrun 0 underrun 0 ignored
0 watchdog 0 bad etype drop 0 bad proto drop 0 if down drop
0 input with dribble 0 input dis

 

On Linux ...

todd@Z840:~$ ethtool ens5d1
Settings for ens5d1:
Supported ports: [ FIBRE ]
Supported link modes: 10000baseKX4/Full
40000baseCR4/Full
40000baseSR4/Full
56000baseCR4/Full
56000baseSR4/Full
1000baseX/Full
10000baseCR/Full
10000baseSR/Full
Supported pause frame use: Symmetric Receive-only
Supports auto-negotiation: Yes
Supported FEC modes: Not reported
Advertised link modes: 10000baseKX4/Full
40000baseCR4/Full
40000baseSR4/Full
56000baseCR4/Full
56000baseSR4/Full
1000baseX/Full
10000baseCR/Full
10000baseSR/Full
Advertised pause frame use: Symmetric
Advertised auto-negotiation: Yes
Advertised FEC modes: Not reported
Speed: 40000Mb/s
Duplex: Full
Auto-negotiation: off
Port: FIBRE
PHYAD: 0
Transceiver: internal
netlink error: Operation not permitted
Current message level: 0x00000014 (20)
link ifdown
Link detected: yes
todd@Z840:~$

 

I did some tests with a 10G link using iperf3.

I was able to get a cumulative 9.8 Gb/s.

[screenshot: iperf3 results over the 10G link]

 

BB


Joseph W. Doherty
Hall of Fame

For TCP, is the receiving host's RWIN sized for the BDP?

For actual data files, are both hosts' I/O subsystems able to actually sustain 40 Gbps?

Are both hosts in the same L2 domain? If not, is PMTUD enabled?
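For a rough sense of scale (the RTT here is an assumption; measure yours with ping):

BDP = bandwidth x RTT
    = 40,000,000,000 bit/s x 0.0002 s   (40 Gbps at an assumed ~0.2 ms LAN RTT)
    = 8,000,000 bits, roughly 1 MB

So a single TCP stream needs on the order of a 1 MB window to fill a 40 Gbps pipe at that RTT; an 85 KByte window would top out around 3.5 Gbps.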

Both machines are in the same domain. A very simple setup where I have:

z640 wkst <--> Mellanox 3 Pro <--> fibre <--> Cisco 3132 <--> fibre <--> Mellanox 3 Pro <--> z840 wkst

I tried enabling jumbo frames with an MTU of 9216 and that made matters worse.

After resetting the machines' NIC cards back to MTU 1500, I'm getting the same performance as before.
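If jumbo frames are revisited, the MTU has to match end to end: host NICs and the switch ports. The show interface output above still reports MTU 1500 bytes on the Nexus, which would explain jumbo on the hosts making things worse. A rough sketch of the Linux side (interface name taken from the ethtool output above; the switch side also has to be raised, as sketched earlier in the thread):

sudo ip link set dev ens5d1 mtu 9000
ip link show ens5d1
ping -M do -s 8972 192.168.12.2    # 8972 payload + 28 bytes of headers = 9000; fails if any hop is still at 1500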

todd@z640:~$ iperf -c 192.168.12.2 #-t 30 -P4
------------------------------------------------------------
Client connecting to 192.168.12.2, TCP port 5001
TCP window size: 85.0 KByte (default)
------------------------------------------------------------
[  1] local 192.168.12.3 port 53866 connected with 192.168.12.2 port 5001
[ ID] Interval       Transfer     Bandwidth
[  1] 0.0000-10.0151 sec  1.06 GBytes   913 Mbits/sec
todd@z640:~$

I'm still researching TCP window sizes.
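One thing worth noting in the run above: the "#" comments out "-t 30 -P4", so that test was a single stream with the default 85 KByte window. A sketch of a run with a larger window and parallel streams (the 4 MB window and 4 streams are only illustrative values to sweep):

iperf -c 192.168.12.2 -t 30 -P 4 -w 4M
iperf3 -c 192.168.12.2 -t 30 -P 4 -w 4M    # same idea with iperf3; the far end needs iperf3 -s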


"TCP window size: 85.0 KByte (default)"

Don't know your BDP, so unable to say if that's sufficient.

Later reply notes you maxed out send and receive windows, so RWIN 2 GB?

Likely, I would think, the issue isn't with the 3132, but finding the problem can itself be a problem.  There are many potential bottlenecks, such as whether both hosts provide the PCIe lanes actually needed to support 40 Gbps, etc.
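A quick way to sanity-check the PCIe side on Linux (the slot address 04:00.0 is only an example; a ConnectX-3 Pro generally needs a Gen3 x8 slot to sustain 40 Gbps):

lspci | grep -i mellanox                        # find the adapter's slot address, e.g. 04:00.0
sudo lspci -vv -s 04:00.0 | grep -i LnkSta      # want Speed 8GT/s, Width x8; x4 or 5GT/s will bottleneck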

Unfortunately, you're, I believe, out on the fringe.  Many decades ago, I had a remote host with horrible performance which I suspected was due to non-optimal NIC settings.  A new NIC driver had dropped the day before; I installed it, and the performance issues disappeared.  Nothing in the driver's release notes described any change that would appear to account for the improvement.  (Although rare, I have seen similar cases of new software releases making huge improvements without documentation of why.)

The foregoing is an indirect hint to try different hardware drivers, if available.  Usually newer is better, but sometimes older is better.

You also may need to consider trying a different brand of 40 Gbps NICs, and other components of the end-to-end environment.  I.e., everything is a possible cause, if you've bumped into a software and/or hardware defect.

Yeah, back in the early days of installing the switch, NIC cards, and cabling, but keeping the original 1 Gb connection, I was seeing NFS shares with transfer rates approaching the drive benchmarks. I'll try to recreate that configuration. The NIC cards (Mellanox ConnectX-3 Pro) appear to no longer be supported, so I'm left with whatever drivers are installed. Since I was able to achieve high rates previously, I suspect it's a server network setting issue somewhere.
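For what it's worth, the ConnectX-3 generation is handled by the in-kernel mlx4 driver on Linux, so trying a newer (or older) kernel is another way to change drivers even without vendor support. A quick check of what's loaded (interface name from the ethtool output above):

ethtool -i ens5d1        # driver (likely mlx4_en), driver version, firmware version
modinfo mlx4_en          # details on the in-kernel module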

Not only is this fringe territory for me, I'm very much out of my element in networking as a whole.

It used to be all plug and play. It took me days just to figure out how to get the 3132 to talk to the port adapters.

But I'll soldier on. I'll try Win 11 for grins.


For Windows, it's a bit out of date, but you might find this https://www.speedguide.net/articles/windows-8-10-2012-server-tcpip-tweaks-5077 information helpful.
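On the Windows side, a couple of built-in checks that may be worth running (the autotuning level is what matters most for high-BDP paths):

netsh interface tcp show global                          # look at the Receive Window Auto-Tuning Level
netsh interface tcp set global autotuninglevel=normal    # re-enable autotuning if it has been disabled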

Toddb
Level 1

Maxing out receive and send window sizes had no effect.  No matter what settings I use, I'm stuck at 1.07 Gb/s.

Interestingly enough, when I first got the switch and connected the fibre optics as a second data link, I was able to copy files to/from either machine at rates of about 7 Gb/s.  At the time I don't recall the settings I used, but the IP addresses were 10.0.0.2 and .3, the internet gateway was connected via the onboard 10/100/1000 Ethernet port, and drives were mounted as NFS shares.

 

I'll try iperf v3, but I don't think I'll have any better luck.  Currently copying a 100 GB file from A to B is still taking too long.
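Before blaming the network for slow file copies, it may be worth measuring the disks on both ends in isolation; 40 Gbps is roughly 5 GB/s, which few single drives can sustain. A rough sketch (the test file path is illustrative; oflag=direct/iflag=direct bypass the page cache):

dd if=/dev/zero of=/data/ddtest bs=1M count=10000 oflag=direct    # ~10 GB write test
dd if=/data/ddtest of=/dev/null bs=1M iflag=direct                # read it back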

Toddb
Level 1

iperf3 gave me a 3 Gb/s score, still a far cry from the 40 Gb NIC card rating.  I suppose I can try booting the Win 10 OS and see if the situation changes any.  From my point of view there are a few areas to validate:

OS: Linux / Win 10 network settings

NIC: Mellanox ConnectX-3 (card defective? and/or drivers, Win/Linux)

Fibre Optic Adapter:  legrand qsfp40g-aoc15m-leg

Fibre Optic Adapter: 10gtek aoc-q1q1-005 40g qsfp+ aoc om3 5m

All of the hardware was purchased used off eBay.  Not sure how to test the individual components.

Toddb
Level 1

iperf3 might have shed light on the issue.  I noticed a column labeled Retr.  Not knowing what it meant, I did a lookup on it; it's a retransmission counter, and I am seeing a large number of these.  Haven't found any posts yet on what causes this or how to fix it.

 


@Toddb wrote:

iperf3 might have shed light on the issue.  I noticed a column labeled Retr.  Not knowing what it meant, I did a lookup on it; it's a retransmission counter, and I am seeing a large number of these.  Haven't found any posts yet on what causes this or how to fix it.


Lost packets?  If so, too many will tank performance.
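Retr in iperf3 counts TCP retransmissions, which usually points at drops somewhere on the path. A few places to look on the hosts (interface name from the earlier output; counter names vary by driver):

netstat -s | grep -i retrans                     # host-wide TCP retransmission counters
ethtool -S ens5d1 | grep -iE 'err|drop|disc'     # NIC-level errors, drops, and discards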

 

On the z840 machine I'm running iperf -s

On the z640 I'm running iperf3 -c 192.168.12.2 -P4 -t 30
 

A summary reports this:

 

[ ID] Interval    Transfer Bitrate Retr
[ 5] 0.00-30.00 sec 669 MBytes 187 Mbits/sec 295 sender
[ 5] 0.00-30.08 sec 668 MBytes 186 Mbits/sec receiver
[ 7] 0.00-30.00 sec 700 MBytes 196 Mbits/sec 421 sender
[ 7] 0.00-30.08 sec 699 MBytes 195 Mbits/sec receiver
[ 9] 0.00-30.00 sec 897 MBytes 251 Mbits/sec 570 sender
[ 9] 0.00-30.08 sec 897 MBytes 250 Mbits/sec receiver
[ 11] 0.00-30.00 sec 1010 MBytes 283 Mbits/sec 193 sender
[ 11] 0.00-30.08 sec 1010 MBytes 282 Mbits/sec receiver
[SUM] 0.00-30.00 sec 3.20 GBytes 916 Mbits/sec 1479 sender
[SUM] 0.00-30.08 sec 3.20 GBytes 913 Mbits/sec receiver

Toddb
Level 1

Any ideas on how to test each connection?  Does the Cisco 3132 have any built-in diagnostics?
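The 3132 does have a few built-in checks worth looking at; some standard NX-OS show commands (interface numbers taken from earlier in the thread):

show interface ethernet 1/9 transceiver details    # optical Tx/Rx power for the QSFP/AOC
show interface ethernet 1/9 counters errors
show interface ethernet 1/9 counters detailed
show logging logfile | last 50                     # recent log messages, including link flaps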
