06-11-2015 05:17 AM
We are using two SG500-52 switches in a stack, in L3 system mode and with the latest firmware 1.4.1.03.
The stack of switches is connected with 2-port LAGs to ESXi hosts. One port of the LAG is on Master switch, the other one is on Backup switch.
LAGs are configured as trunks, with port-channel mode "ON" and the default load-balance policy "src-dst-mac".
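For anyone comparing setups, the stack-side configuration was roughly the following. The interface and port-channel numbers are placeholders and the Sx500 CLI syntax is reproduced from memory, so treat this as a sketch rather than a verified config:

```
! cross-stack LAG: one member port on each stack unit
interface Port-Channel1
 switchport mode trunk
exit
interface gi1/1/10
 channel-group 1 mode on
exit
interface gi2/1/10
 channel-group 1 mode on
exit
! default load-balance policy on the Sx500 series
port-channel load-balance src-dst-mac
```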
We created two ACLs on switch.
One of them, which we bound to our DMZ VLAN (input direction), is behaving strangely with traffic originating from ESXi virtual machines (VMs).
According to DMZ ACL rules, traffic coming from DMZ VMs to our default VLAN (10.1.0.x) should be blocked.
However, it happens that traffic is randomly forwarded to either odd or even destination IP addresses in default VLAN.
Example:
ping to 10.1.0.7 (last octet is odd number) => success
ping to 10.1.0.8 (last octet is even number) => blocked
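For context, the DMZ ACL and its VLAN binding were along these lines. The VLAN ID, ACL name, and exact ACE syntax below are illustrative rather than our literal config, so verify against the Sx500 CLI guide:

```
! deny DMZ-to-default-VLAN traffic, permit everything else
ip access-list extended DMZ-IN
 deny ip any 10.1.0.0 0.0.0.255
 permit ip any any
exit
! bind the ACL to the DMZ VLAN in the input direction
interface vlan 30
 service-acl input DMZ-IN
exit
```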
Now, we performed some further troubleshooting:
- We took a laptop and connected it directly to the switch stack, on an access port in the DMZ VLAN. All traffic from the laptop to the default VLAN is blocked by the ACL. That confirms the ACL rules themselves are fine.
- When we pull one cable out of the LAG and leave only the Master switch in the stack to handle traffic from ESXi => traffic is blocked by the ACL!
- However, leaving the single LAG cable only on the Backup switch in the stack => traffic is not blocked at all, regardless of odd or even IP address. All traffic passes from the DMZ; it looks like the ACL is not applied at all!
Has anyone else seen this happen with the SG series switches?
02-08-2016 02:31 AM
Hello!
I have the same problem.
Two SG500-24 switches, stacked in Native Stacking mode, firmware 1.4.1.3.
If a LAG is distributed between the Master and Backup switches,
then traffic arriving on the Backup switch's ports of that LAG
is actually NOT filtered by the IPv4 ACEs bound to the VLANs.
But single-port connections (not LAGs) on the Backup switch work correctly with the ACEs.
Please help me solve this problem.
Maybe I need to upgrade to 1.4.2.x?
02-08-2016 03:22 AM
We could not get the LAG link working reliably between the stack of SG500 switches and the ESXi hosts. With other network equipment there were no issues whatsoever.
Instead, we implemented a workaround:
- We dismantled the LAG links between the SG500s and the ESXi hosts and connected each stack member with a single UTP link to a separate network card on the ESXi host
- On the ESXi side, in the NIC teaming options, we set the load balancing algorithm to "Route based on the originating virtual port ID"
This combination still gives us failover should any of the NICs or UTP links fail.
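On the switch side, the workaround amounts to removing the channel-group membership and trunking each former member port individually. Interface numbers here are placeholders and the syntax is approximate:

```
interface gi1/1/10
 no channel-group
 switchport mode trunk
exit
interface gi2/1/10
 no channel-group
 switchport mode trunk
exit
```

Note that "Route based on originating virtual port ID" is the ESXi teaming policy that does not require a switch-side LAG (unlike "Route based on IP hash"), which is why the ports must be un-bundled first.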
Hope that helps!
02-08-2016 04:06 AM
In our case, the LAGs distributed over the stack connect four other switches,
each with two links: one to a LAG port on the Master and one to a LAG port on the Backup.
This Master/Backup SG500 stack is our network core.
05-21-2018 09:02 AM
Me too.
Just in case somebody stumbles over this, I have created service request 684517426.
06-15-2018 09:58 AM
I worked with the support team and it is a confirmed bug.
The bug ID is CSCvj91570.
For those with access, you can look it up and follow it here.