06-14-2025 09:55 PM
I have two trunk links connecting two switches together in an EtherChannel as indicated in the diagram above. All 4 dishes have static IPs in my native VLAN 1. When the links are up and in trunk mode I can usually only ping 1 dish. I have a Windows PC in VLAN 1 on IDF6, and my dishes' IPs end in 210, 211, 212, and 213. When the links are up and trunking I can only ping .213, but if I disable int 1/43 I can see half the dishes, and if I disable 1/44 on IDF6 I can see the other half of the dishes (but not both halves at the same time, since the link to IDF7 would then be down). I have verified that my native VLAN is 1 and that the dishes' IPs are in VLAN 1. I should be able to see all 4 IPs when 1/43 and 1/44 are trunks, but I can't.
I understand that in trunk mode Cisco supports and accepts VLAN 1 both tagged and untagged at the same time. How can I force all VLAN 1 traffic to be untagged? By all accounts the documentation says that native VLAN 1 should be transmitted untagged even in trunk mode, but that doesn't appear to be the case here. IDF6 is a Cisco 4948 running 15.0(2) and IDF7 is a 4948E running 15.2(4)E10, but I have the same issue between two 4948E's running the latest 15.2(4)E10 as well.
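In case it helps, this is roughly what I've been using to check the trunk, native VLAN, and bundle state (output omitted; interface numbers are the ones described above):
IDF6# show interfaces trunk
IDF6# show interfaces gi1/43 switchport
IDF6# show etherchannel summary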
06-14-2025 11:10 PM
Hello @dajohnso
Note that if the vlan dot1q tag native command is enabled, VLAN 1 will be tagged on the trunk... So, disable it with the no vlan dot1q tag native command.
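For example (it's a global configuration command, and as far as I know you can check the current setting afterwards with the show command below):
Switch(config)# no vlan dot1q tag native
Switch# show vlan dot1q tag native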
06-15-2025 08:40 AM
I suspected the same and I had already tried this and didn't see a difference.
06-15-2025 01:05 AM
Hello @dajohnso,
this looks to me like normal behavior based on the nature of etherchannels.
The default load-balancing scheme on most platforms is 'src-mac' unless you changed it to another scheme.
So let's look at what is going on based on the default 'src-mac' scheme.
As soon as your PC sends any traffic over the etherchannel, IDF6 will choose one link for all traffic sourced from this PC. Let's say IDF6 chooses int 1/43 for traffic from your PC. This simply means that you will not be able to ping the dishes on the other link from this PC.
The same mechanism kicks in for the return traffic: IDF7 will also choose just a single link for the replies coming back toward your PC. So the return traffic might use the same or the other link. If you wait some time, the switch might choose the other link, leading to inconsistent ping results.
So you need to ping from at least 2 different stations to have a chance to reach the dishes on both links.
You can change the load-balancing algorithm and your switches should support various combinations of src/dst mac/ip/port depending on your needs. Nevertheless, for each combination the switch will always choose a single link and you still need more than one station to reach all dishes.
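If your platform supports it, you can even ask the switch which member link it would choose for a given source/destination pair. Something along these lines (the test command is not available on every platform, and the port-channel number and addresses are only placeholders):
SW# show etherchannel load-balance
SW# test etherchannel load-balance interface port-channel 2 ip <pc-ip> <dish-ip>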
HTH!
06-15-2025 06:28 AM - edited 06-15-2025 07:20 AM
The default load-balancing scheme on most platforms is 'src-mac' . . .
BTW, certainly true historically, and may be true for 4948s, but I believe later switches use better defaults. Regardless, it's usually worthwhile ensuring you're using the platform's best option (which also varies between platforms) to LB your traffic.
Also, BTW, @Jens Albrecht and I are attributing the issue to the same cause; we just describe it differently.
Also, take note of what Jens correctly describes: with src-only LB one source may fail and you would need to try additional sources, but, correctly, he doesn't guarantee that will work either.
The only "flaw" (of omission) in what Jens described: I believe there's a corner case where having multiple source stations still precludes it from ever working, i.e. when dst only is used for the LB choice.
(EDIT: Rereading the above two paragraphs, what might be unclear is this: with src-only LB, whether a ping reaches a given dish is hit or miss, but with dst-only LB it's all or none. However, the dish's ping reply is another potential issue. It's a mess because. . .)
The underlying issue is using an in-band management addressing scheme for your dish devices, which sit in the path of the Etherchannel's member links.
06-15-2025 08:46 AM - edited 06-15-2025 08:50 AM
Yes, this makes a lot of sense! I had also wondered why, when I did a speed test from my laptop, I was only getting half the speed I expected, as if all the traffic was only going over one dish. This answers both questions. In a perfect world I would want the two links to load balance all traffic based on something like the size of the queue of traffic pending transfer on each link.
This also explains why the other dishes show up when I turn one link off. The remaining link becomes the only path, and after a few seconds the dishes on that link that are still up show up in ping.
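For what it's worth, I've been watching how the traffic actually splits by checking the rates on both member interfaces during a speed test (output trimmed; interface numbers are mine):
IDF6# show interfaces gi1/43 | include rate
IDF6# show interfaces gi1/44 | include rate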
06-15-2025 05:36 AM
I'm wondering whether the issue is due to assuming Etherchannel provides a single L2 broadcast domain across the member links themselves. Etherchannel, of course, extends an L2 broadcast domain between the two IDFs, but your dish devices are not on "normal" IDF ports.
Let me put it another way. If IDF6 ARPs for an IP, would you expect the ARP to physically be transmitted across both member links to IDF7? When IDF7 receives the ARP on one member link, would you expect it to transmit it back to IDF6 on the other member link?
Do you see the possible issue? Do you also see why it shouldn't be a problem except when there's more than one active member link?
06-15-2025 08:57 AM - edited 06-15-2025 09:29 AM
Yes, I see what could be a problem and why it works that way. I just assumed the "int po2" would handle all of that, manage the two links as one group, and send traffic over the less utilized interface. I really wish there was a way to weight the links as well so they didn't have to be the same speed. I wanted dual links for redundancy and increased bandwidth for each device in IDF7. I am using Ubiquiti NBE-5AC dishes with a max speed of 450 Mbps, and I thought I would get close to 900 Mbps with a pair, so maybe I need to upgrade to 1G dishes. Where do I make the changes for LB options, in the PO2 or each int 43 and 44?
I can see now how, if the "po2" managed all the traffic on the links evenly, I would never be able to talk to either dish directly, as the traffic would pass over one link or the other randomly. If I install new dishes with a dedicated management interface and run a new cable from VLAN 1, that would allow me to access the radios. For now I will have to accept shutting down one link for access. (Sorry for all the edits, it wouldn't let me post with the wording I chose before???)
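If I do add a management cable later, I assume it would just be a plain access port in VLAN 1 for each radio's management interface, something like this (interface number is just an example):
IDF6(config)# interface gi1/20
IDF6(config-if)# switchport mode access
IDF6(config-if)# switchport access vlan 1
IDF6(config-if)# spanning-tree portfast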
06-15-2025 09:55 AM
Where do I make the changes for LB options, in the PO2 or each int 43 and 44?
It's usually (always?) a global option.
See https://www.cisco.com/c/dam/en/us/td/docs/switches/lan/catalyst9600/software/release/16-12/configuration_guide/lyr2/configuring_etherchannels.html#configure-etherchannel-load-balancing but your switches probably have fewer LB options.
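I.e., something along these lines in global config mode, not under Po2 or the member ports (src-dst-ip is just an example keyword; your platform may offer different ones):
IDF6(config)# port-channel load-balance src-dst-ip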
06-15-2025 10:01 AM - edited 06-15-2025 10:21 AM
I wanted dual links for redundancy and increased bandwidth for each device in IDF7.
For dual Etherchannel links, you normally average about 50% more aggregate bandwidth, but any one flow is limited to using one link.
If the links were L3, you would have more options. However, there are other (often 3rd-party) L2 options to better utilize multiple links.
06-15-2025 10:25 AM
Changing the LB behavior is a global command and the available options depend on the platform capabilities.
On a 3560X switch running IOS 15.2(7)E11 the default is 'src-mac', as usual for IOS platforms, with the following options:
SW_VMs(config)#port-channel load-balance ?
dst-ip Dst IP Addr
dst-mac Dst Mac Addr
src-dst-ip Src XOR Dst IP Addr
src-dst-mac Src XOR Dst Mac Addr
src-ip Src IP Addr
src-mac Src Mac Addr
On your 4948E switches you should also have the option to set src-port, dst-port and src-dst-port.
In most cases one of the Src-XOR-Dst options gives the best traffic distribution over both links, but the limitation that one flow uses only one link always exists in the case of L2 etherchannels.
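For example, to switch to src-dst-ip and confirm it afterwards (the exact output wording differs a bit between platforms):
SW_VMs(config)# port-channel load-balance src-dst-ip
SW_VMs# show etherchannel load-balance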
06-15-2025 01:02 PM
In most cases one of the Src-XOR-Dst options gives the best traffic distribution . . .
Concur.
. . . the limitation that one flow uses only one link always exists in the case of L2 etherchannels.
BTW, this also applies to L3. That said, if you run it as L3 while load balancing on MACs, the MACs would usually be the L3 interfaces' MACs on either side of the logical link, i.e. unchanging, so a single member link would always be used.
On some platforms LB may have additional options like also using TCP/UDP port numbers, which can help distribute flows between a pair of hosts.
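E.g., if it's listed under 'port-channel load-balance ?' on your platform (just an illustration; keyword availability varies):
Switch(config)# port-channel load-balance src-dst-port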