02-23-2022 06:47 AM
I have two 4500 switches connected in VSS, each with a 10G SFP module, and two servers connected to them in a 2-port port-channel, with each port going to a different switch. I'm suffering from low data transfer speed between one server that takes backups of the VMs and another server that hosts the VMs. The server team suggested changing the interface MTU from 1500 to 9000, but would that cause a major connectivity issue? Or would it even solve the problem?
02-23-2022 07:02 AM
And beyond that, does the rest of the path for this traffic support 9000 or not? If you change the MTU but the egress side is still 1500, the router CPU gets loaded down fragmenting the packets.
Check this point
02-23-2022 07:08 AM
The key concept to keep in mind is that all the network devices along the communication path must support jumbo frames. Jumbo frames need to be configured to work on the ingress and egress interface of each device along the end-to-end transmission path. Furthermore, all devices in the topology must also agree on the maximum jumbo frame size. If there are devices along the transmission path that have varying frame sizes, then you can end up with fragmentation problems. Also, if a device along the path does not support jumbo frames and it receives one, it will drop it.
https://www.google.com/amp/s/www.networkworld.com/article/2224654/mtu-size-issues.amp.html
So the right move for you is to open a TAC case with Cisco and ask whether the switch hardware can handle fragmentation if it needs to.
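If you do end up testing jumbo frames end to end, the per-interface change on a 4500 would look roughly like the sketch below. The port-channel and interface numbers are only placeholders, and you should verify the maximum MTU your supervisor/line cards actually support (and remember the server NICs/vSwitch need to be set to 9000 as well):
conf t
 interface Port-channel10
  mtu 9000
 interface range TenGigabitEthernet1/1/1 , TenGigabitEthernet2/1/1
  mtu 9000
 end
show interface Port-channel10 | include MTU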
02-23-2022 07:03 AM
Low data rate - this needs to be defined. Where is the source traffic coming from?
Increasing the MTU to jumbo frames will have an advantage, but before that you need to understand the problem here correctly.
There is no harm in increasing the MTU and testing.
02-23-2022 10:16 AM
The source is a VM hosted on a server connected to the core switch with a 10G interface, and the destination is a backup server connected to the same core switch with a 2 x 10G port-channel.
02-23-2022 04:08 PM
How are you doing this test?
What is the backup server device? NetApp?
Have you connected one of the laptops or a PC to the same switch and tested the transfer on both sides?
02-24-2022 01:32 AM
The backup server runs "Veritas Backup Exec". How can I test from a PC if I'm using port-channels on both the VM host server and the backup server? Also, my PC interface is only 1G copper, not SFP.
02-24-2022 01:43 AM
If you'd like to solve the issue, you need to have the right tools in place to test.
We are still not sure what "low rate" actually means here. If you connect the PC to the switch in the same VLAN with a GLC-T (copper Ethernet SFP), you can at least build a test environment and measure the speed.
Other suggestions that ring a bell (see the example commands at the end of this post):
1. shut down one of the links in the port-channel and check whether there is any improvement.
2. bring it back and test the same way with the other link.
3. also consider the load-balance mechanism when you are using LACP.
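For suggestions 1 and 2, something along these lines should do it (Te1/1/1 is just a placeholder member - check show etherchannel summary first to see which physical ports are actually in the bundle):
show etherchannel summary
conf t
 interface TenGigabitEthernet1/1/1
  shutdown
 end
! run the backup/transfer test, then bring the link back and repeat with the other member
conf t
 interface TenGigabitEthernet1/1/1
  no shutdown
 end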
02-24-2022 01:58 AM
What I mean by low data rate is around 70-100 Mbps; sometimes it works fine and jumps to 14 Gbps. I need to confirm that the issue isn't from the switch side. I have already checked both interfaces and they are all running at 10G full duplex with no errors.
1. shut down one of the links in the port-channel and check whether there is any improvement. ### tried, but it didn't fix it
2. bring it back and test the same way with the other link. ### tried, but it didn't fix it
3. also consider the load-balance mechanism when you are using LACP. ### which load-balance mechanism should I choose as a standard? Also, if I tried with only one interface active in the port-channel and it didn't solve the issue, doesn't that rule out the load-balance mechanism as the problem?
02-24-2022 02:02 AM - edited 02-24-2022 02:03 AM
Also, I would like to see the LACP config on the switch - can you post that?
show run interface port-cha X
And part of the port-channel member interface config:
show interface x/x
show run interface x/x
When the transfer is slow, what is the CPU load?
Do you see certain times when you do get good speeds?
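For the CPU question, running something like the following while a slow transfer is in progress should be enough (the interface ID is a placeholder; exact output varies by IOS version):
show processes cpu sorted | exclude 0.00
show interfaces TenGigabitEthernet1/1/1 counters errors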
02-23-2022 08:46 AM
Well, ideally, jumping to a 9K Ethernet MTU from the standard maximum of 1.5K bumps payload transfer efficiency up 2 or 3%. Depending on the platform, it can reduce the "workload" of frame/packet processing by a factor of about 6, whose impact (for improvement) varies. (I would expect it to be of most benefit on "faster" interfaces, where the platform is "working harder" to maintain link speed. I.e., I wouldn't count on a 6x improvement.)
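Rough back-of-the-envelope numbers behind that, assuming the usual ~38 bytes of per-frame overhead (preamble + Ethernet header + FCS + inter-frame gap):
1500-byte MTU: 1500 / 1538 ≈ 97.5% payload efficiency
9000-byte MTU: 9000 / 9038 ≈ 99.6% payload efficiency
Frames needed for the same amount of data: 9000 / 1500 = 6x fewer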
As the other posters have already noted (@MHM Cisco World and @balaji.bandi), there are some "gotchas" to using jumbo Ethernet. One I don't believe was mentioned: since jumbo Ethernet is non-standard, the jumbo MTU might vary by device (creating its own set of issues).
You didn't mention how slow the "low data transfer speed" actually is. Plus, you do realize that Etherchannel a) will only run one flow on one link, i.e. a flow's maximum rate is one link's bandwidth, b) doesn't dynamically load balance, i.e. one link can be saturated while the other passes no traffic, and c) a non-optimal hashing algorithm choice can very much limit usage of Etherchannel's additional bandwidth (for multiple flows). Also, as you mention VSS (if it works like the Catalyst 6Ks did), there are extra "gotchas" when using VSS, as VSS "breaks" some Etherchannel rules to avoid passing traffic between the VSS pair.
I would recommend a bit more investigation into what's happening now, even beyond the "network" devices. For example, if your hosts are on different networks, hosts will use an MTU of 576 unless PMTUD is enabled. Or, for example, if the flow is TCP, TCP requires the receiving host's RWIN to support the BDP (bandwidth delay product). The latter is often not an issue on a LAN, but at 10g it might be.
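As a rough illustration of the RWIN/BDP point (the 1 ms RTT here is just an assumed example value):
BDP = 10 Gbps x 1 ms = 10,000,000,000 b/s x 0.001 s / 8 = ~1.25 MB
So the receiving host's TCP window would need to be at least around 1.25 MB for a single flow to fill a 10g link at that latency.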
02-23-2022 10:17 AM
So you mean if I have a 2 x 10G port-channel, that doesn't mean I have 20G of speed? Is the port-channel only useful for redundancy?
02-23-2022 02:21 PM - edited 02-23-2022 03:08 PM
"so you mean if i have 2 10gb port-channel that doesn't mean I have 20gb speed? the port-channel use is only redundancy?"
Yes and no.
Again, (at least on Cisco) a single flow can only ever use one link. So with dual 10g, the maximum throughput for one flow is 10g.
Even with just two flows, they both might "hash" to the same link (50/50 odds).
With an "optimal" hashing (load balancing) algorithm choice, the average for a dual-10g Etherchannel would often be about 15g of usable aggregate (even though the logical port "shows" 20g).
02-23-2022 02:38 PM
The bandwidth increases, but the speed is the same.
For example, say we have two frames that need switching. With a single link, frame 1 is sent and frame 2 waits until frame 1 has been sent.
With an EtherChannel, frame 1 is sent and frame 2 is also sent at the same time - the same speed per frame, but two frames get sent.
02-23-2022 11:11 AM
Hello,
Do you have load balancing configured for your port-channel? Try different load balancing algorithms...
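On a 4500 the hashing method is a global setting, so it would be something like the following (check with "?" which hash options your software version actually offers):
show etherchannel load-balance
conf t
 port-channel load-balance src-dst-ip
 end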