The switch creates a DHCP snooping binding entry once it sees a DHCPACK. Normally, when the DHCP server sits outside the switch, the DHCP ACK is received on a trusted port. When a local DHCP server is configured on the switch itself, the DHCP ACK is locally generated, which could be the reason the binding table is empty on your 3550A.
That said, in my opinion this seems to be incorrect behavior even with a local DHCP server. I'd suggest you log a case with TAC and get it addressed fully.
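For reference, a typical DHCP snooping setup with an external server looks something like the sketch below; the VLAN and interface numbers are illustrative, not taken from your setup.

```
! Enable DHCP snooping globally and for the client VLAN
ip dhcp snooping
ip dhcp snooping vlan 20
!
! Trust only the uplink toward the DHCP server;
! all other ports stay untrusted by default
interface FastEthernet0/24
 description Uplink toward DHCP server
 ip dhcp snooping trust
```

With a local DHCP server on the switch itself there is no trusted ingress port involved, which is what makes the locally generated ACK a special case.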
Many thanks for the reply.
The 3550A switch is showing DHCP snooping bindings now.
3550SMIA#sh ip dhcp snooping binding
MacAddress          IpAddress        Lease(sec)  Type           VLAN  Interface
------------------  ---------------  ----------  -------------  ----  ------------------
44:2A:60:A6:E6:CC   192.168.20.39    54874       dhcp-snooping  20    FastEthernet0/20
00:1E:4C:3B:4C:1E   192.168.20.8     60388       dhcp-snooping  20    FastEthernet0/20
F4:CE:46:67:9B:45   192.168.20.34    68141       dhcp-snooping  20    FastEthernet0/20
00:25:BC:05:27:91   192.168.20.32    85753       dhcp-snooping  20    FastEthernet0/20
34:08:04:99:98:1C   192.168.20.35    47271       dhcp-snooping  20    FastEthernet0/20
00:40:F4:18:6D:8A   192.168.20.33    65628       dhcp-snooping  20    FastEthernet0/15
Total number of bindings: 6
I have a question regarding HSRP and STP.
There are two switches, 01 and 02, and say for example there is a VLAN 21 with HSRP running on it.
I configure VLAN 21 to be HSRP active on switch 02, but the spanning-tree VLAN priority is configured so that switch 01 is the root.
What happens with this setup?
Awaiting your reply.
(Matt/Jane - hope you don't mind me jumping in!)
Firstly, it doesn't stop things working. However, it does depend on how the rest of the network is connected.
Let's say you have two switches, sw1 and sw2, with an access-layer switch as1 connected to both.
sw1 and sw2 are connected via an L2 trunk.
sw2 is HSRP active for vlan 21 and sw1 is STP root for vlan 21.
Now this is a common L2 access-layer / distribution design, with the inter-VLAN routing happening on sw1/sw2. In this design the interconnect between sw1 and sw2 will always be forwarding.
as1 is connected to both sw1 and sw2 with L2 trunks. So you have an L2 loop, which means one of the links must be blocked. Because sw1 is the STP root for vlan 21, the as1 -> sw1 link will be forwarding and the as1 -> sw2 link will be blocked. A client PC connected to as1 in vlan 21 now wants to send traffic to a server on a different VLAN, so the client PC needs to send the traffic to its default gateway. sw2 is the active switch for the client's default gateway, but as1 is blocking its link to sw2, so the traffic path is -
as1 -> sw1 -> sw2
rather than the optimal path of as1 -> sw2. So it still works; it's just that you are sending the traffic over a suboptimal path. If sw1 were HSRP active as well as STP root then the path would simply be -
as1 -> sw1
That is why it is recommended to match the HSRP active router and the STP root per VLAN.
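As a sketch of what aligning the two looks like on sw1, something like the following is typical; the addresses and priority values here are examples only.

```
! On sw1: make it STP root for vlan 21
spanning-tree vlan 21 root primary
!
! On sw1: make it HSRP active for vlan 21
! (default HSRP priority is 100, so 110 wins the election)
interface Vlan21
 ip address 192.168.21.2 255.255.255.0
 standby 21 ip 192.168.21.1
 standby 21 priority 110
 standby 21 preempt
```

With this in place, traffic from as1 follows the forwarding uplink straight to the active gateway on sw1.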
One last point. If you use GLBP in this scenario it suffers from the same problem regardless of which switch is STP root. That is because with GLBP both switches are active gateways, so some traffic can go direct and some has to cross the sw1 -> sw2 interconnect to reach its active gateway. A better design with GLBP is to interconnect sw1 and sw2 with an L3 link, which breaks the L2 loop and allows both uplinks from as1 to be forwarding.
So in both scenarios you need to scale the interconnect between sw1 and sw2 accordingly. That is often why you find this interconnect has more bandwidth than the uplinks from the access-layer switches.
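For completeness, a minimal GLBP configuration on one of the distro switches might look like the sketch below; addresses and group numbers are illustrative.

```
! GLBP: both routers answer ARP for the virtual gateway,
! handing out different virtual MACs to load-balance clients
interface Vlan21
 ip address 192.168.21.2 255.255.255.0
 glbp 21 ip 192.168.21.1
 glbp 21 priority 110
 glbp 21 preempt
 glbp 21 load-balancing round-robin
```

Because each client is bound to one of the two virtual MACs, roughly half the traffic will land on whichever switch is not directly reachable, which is why the interconnect matters so much here.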
Spanning-tree portfast should never be run on trunk links between switches, because portfast allows the port to begin forwarding immediately, and this could create L2 loops if the switches are redundantly connected. Portfast ports still run spanning-tree, but by the time it has worked out there is a loop it may be too late.
However, there are some trunk links you can safely run portfast on: trunks to servers, a trunk to a router/firewall, etc. For these types of connections the spanning-tree portfast trunk command was created, because it is safe to run portfast on such a link as no L2 loop should be created.
But as I say, you should not run this command on trunk links between switches.
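As an example of the safe case, a trunk toward a server (say, a virtualization host carrying multiple VLANs) could be configured roughly as follows; the interface and description are hypothetical.

```
! Trunk to a server host - no downstream switch, so no L2 loop
interface GigabitEthernet0/1
 description Trunk to virtualization host
 switchport mode trunk
 spanning-tree portfast trunk
```

The port skips the listening/learning delay, which matters for hosts that need connectivity immediately after link-up.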
Matt / Jane
Following on from the question about HSRP active / STP root, I was just wondering about the design where you interconnect your distribution switches with an L3 link. This obviously means both L2 uplinks from the access-layer switches can be forwarding at the same time, so you can get more bandwidth from the access layer, and if you use GLBP, for example, you can actually load-balance between your distro switches.
My concern with this design has always been that GLBP/HSRP messages then have to go via the access-layer switches, because there is no direct L2 adjacency between the distro switches. I can't give any specific technical reason why this is bad, but I have never felt comfortable with the access-layer switches being used as transit switches between the two distro switches.
Are there specific technical reasons why this is a bad thing to do, and is it a validated Cisco design?
I appreciate you can go further and use L3 from the access layer, and with the advent of MEC on the 3750s, VSS and vPC you can design around this, but I would still be interested to hear whether the above is a valid design.
Thanks for pinch-hitting for us on the weekend! You are more than welcome to join in conversations here. As for this kind of design with the layer-3 interconnect: it is a valid design when you want to remove STP from the picture. Typically in this kind of scenario you will see that each access-layer segment has its own dedicated VLAN which does not extend to the other access-layer segments.
So the goal here is to remove the reliance on STP between the access layer and the distribution/core layer by removing the physical loop from the picture.
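A routed interconnect between the two distro switches is usually just a point-to-point L3 link; a rough sketch (interface and addressing are examples) on one side would be:

```
! Routed point-to-point link to the peer distro switch
interface GigabitEthernet0/24
 description L3 interconnect to distro peer
 no switchport
 ip address 10.0.0.1 255.255.255.252
```

With no L2 trunk between the distro switches and per-segment VLANs that do not span access switches, there is no physical loop for STP to block.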
Thanks for running this forum. This is great stuff.
To provide context for the questions below, I am not asking about QoS or priority queuing. I am interested in speed matching -- where packets coming in on a high-speed port are switched toward a lower-speed port. This is just context...
It is my understanding that current 3750/3560 Catalyst switches use port ASICs that each manage their own buffer pool. I have been told that each ASIC has 2.75 MBytes -- 2 MBytes for output and 750 KBytes for input queuing. I have been told that one ASIC can support 24 Gig-E ports or two 10-Gig-E ports. But I have never seen this info in print. So:
Question 1: Is this info correct? Why is it hard to find, or am I looking in the wrong place? This info is published for 65xx line cards.
Question 2: How are the ASICs and buffer memory arranged in the all-10-Gig WS-C3560E-12D-E? What is the largest chain of packets (in bytes) that can queue on a port with a Gig-E TwinGig converter? What should I read to be able to answer this question myself?
For your question 1: yes, this information is correct. It is not currently documented externally. Each individual business unit inside Cisco decides what to publish versus keep internal, and in this case the business unit for the desktop switches has chosen not to publish this externally in a document.
For question 2, the 3560E uses an entirely different architecture from the regular 3750/3560 switches. Its ASIC can handle two 10-Gig ports per ASIC and has 12 MB of buffer in the egress direction. It can buffer a maximum of 8K packets at one time at a packet size of 1542 bytes.
I have a C4948-10GE whose sole purpose was connecting my MCU (Cisco MSE 8000 chassis with MSE 8710 TelePresence Server blades). While the MCU was connected to this 4948 the video quality was poor. There was an irregular pulsing effect caused by responses to Fast Update Requests from the endpoints connected to the MCU, or from the MCU itself. As soon as the MCU was connected to a different C4948-10GE (same config on everything and same cables) the video quality was good. Both switches run in layer 2 only, with no auto QoS, and the switch ports are configured to trust the DSCP markings of the MCU.
Thanks in advance,
Sent from Cisco Technical Support iPhone App
I have seen this happen once before, and it was a hardware failure. I would recommend opening a TAC case, having the switch replaced, and having them perform an engineering failure analysis (EFA) on it.
What happens, short term, to packets/frames that exceed the limit imposed by the srr-queue bandwidth limit command on a 3560/3750 switch? Are they immediately dropped, like with a rate-limiter, or are they placed into the appropriate egress queue and just delayed?
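For anyone reading along, the command being asked about caps egress bandwidth to a percentage of the port speed; a minimal example (interface and value are illustrative) looks like:

```
! Limit egress on this port to roughly 20% of line rate
interface GigabitEthernet0/1
 srr-queue bandwidth limit 20
```

The question is what the shaping mechanism does with the excess: immediate drop versus queue-and-delay.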