
How can I pass another complete network over my network

dajohnso
Level 1

I have a network that connects two buildings with switch A and switch B. These two switches are connected with a pair of trunks using 1G dishes. I have about 10 VLANs configured in my environment and everything works fine for my equipment and networks. I have a need to connect a third party's network (switches C and D) over my network, but I don't want to intermix the two networks. I have verified they have Cisco switches as well and none of the VLANs overlap (except we both have native VLAN 1; I know, I'm already considering changing to a different native VLAN). How can I bridge their switches using my link?

I tried turning off LLDP and connecting their trunk to my switch on an unused VLAN, and that didn't work. I changed my ports to trunks and only allowed VLANs 1,400,600,1002-1005, and I was able to ping from C to D (over A and B) with different IP blocks than I use in my network, but when I added a VLAN to switch C it did not show up on switch D via VTP, even though it does when they are directly connected.

So in short: how can I connect C and D over my A and B so that C and D do not show up as CDP neighbors in my environment, C and D do not see my switches as CDP neighbors, and all traffic and administration of C can reach D? I don't need to (or want to) be able to access anything on C or D, and I certainly do not want C and D to see anything in my network. I can make any changes to my environment needed, but I can only make suggestions for changes on the vendor network. Would it help if I moved everything in VLAN 1 to VLAN 2 and made it my native/management VLAN? I want the connection between C and D to be as transparent as possible, almost like they are directly connected. (I would rather NOT put them on one set of dishes and me on the other; I like having the redundant pair of dishes even though I do not need 2 Gbps throughput. They are in a port-channel group for load balancing and redundancy.)

dajohnso_1-1745769954962.png

Oh, and at the risk of stating what you may already know, you're not limited to connecting just two external switches sharing VLANs for any one external customer.

Please expand on this. I'm not sure I understand what you're stating here. Are you indicating that I can have 2 or more vendor switches in the same dot1q vlan, or that I can have multiple switches in a network connected to a vendor core C and multiple switches in a vendor core D (keeping the model discussed above) bridged via the dot1q tunnel on my network? I was under the impression the dot1q tunnel I created was a point-to-point type connection to bridge 2 "vendor" networks. Just thought of another question: if this statement is true, then let's say switch C and D have 2 trunks each; I assume each trunk would be in its own dot1q "access vlan" and not the same vlan, which would otherwise confuse the switches?

Perhaps your questions are better explained in Cisco's IEEE 802.1Q Tunneling documentation, such as:

The IEEE 802.1Q Tunneling feature is not restricted to point-to-point tunnel configurations. Any tunnel port in a tunnel VLAN is a tunnel entry and exit point. An 802.1Q tunnel can have as many tunnel ports as are needed to connect customer switches.

I've harped a couple of times that these .1Q tunnel ports should not be thought of as either access ports or trunk ports, although they share some characteristics of both port types.

Possibly one way to think of these tunnel interfaces is that they provide ports to a virtual hub that supports external VLANs.

Or, remember, internally there's just one VLAN being passed around, just like any other single internal VLAN.

However, if you want to use certain p2p L2 features, you need to ensure you limit your .1Q tunnel topology usage.
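
To make that concrete, here's a minimal sketch of what a .1Q tunnel port looks like on the provider switch (interface and VLAN numbers are placeholders, not from any lab in this thread):

! Customer/external-facing port - configured with BOTH an access VLAN and
! dot1q-tunnel mode, which is why it is neither a classic access nor trunk port
interface GigabitEthernet1/0/1
 switchport access vlan 100
 switchport mode dot1q-tunnel
!
! Provider core uplink - an ordinary trunk that carries the tunnel VLAN
interface GigabitEthernet1/0/24
 switchport mode trunk
 switchport trunk allowed vlan 100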

OK, but I assume then that dot1q as a multipoint connection wouldn't work for trunks; I'm not sure how multiple switches with trunks connected to the same dot1q tunnel would function. But I can understand how, if multiple switches have overlapping VLANs, a dot1q tunnel could help pass those VLANs' traffic across my network, bridging multiple vendor switches in the same tunnel.


@dajohnso wrote:

OK, but I assume then that dot1q as a multipoint connection wouldn't work for trunks; I'm not sure how multiple switches with trunks connected to the same dot1q tunnel would function.


Yes, it wouldn't seem like it should work, but I suspect it would (I can't confirm my suspicion as I cannot "lab" it).

However, what if any one trunk's egress traffic, to a specific .1Q tunnel, is "flooded" to all the other corresponding .1Q tunnels?

Undesired "flooded" unicast traffic isn't unknown to switches.  They just will either flood each received frame to all same VLAN ports, except ingress port, or they will send frame to a specific egress port.  I.e. nothing the switch cannot correctly handle.

The only downside is that frames will be sent to all the other .1Q tunnel ports, although we only want them sent to one specific .1Q tunnel port.

If you consider this not too efficient, it isn't, but that's also why, since the introduction of L3 switches, L3 topologies are generally recommended over L2 topologies, even down to the user network access edge.

dajohnso
Level 1

Since we were discussing the "feature" of 15.0 that prevented VLAN 1 from connecting across the tunnel, I found another "feature" in this version of IOS. I can have fiber trunks, and I can have copper trunks, but you can't have both in the same channel group. It will show all the trunks UP and connected, but it will only use all the fibers, or all the copper, not both. Also, if all trunks are up with traffic passing on the fiber and you unplug the fiber links, it doesn't fail over to the copper ones, even when "show" commands indicate they are all up and available; it just won't pass traffic. I'm interested in seeing if this is also fixed when I upgrade my core to 4948E.

I have looked at the configs and status of all ports and they were set to the same characteristics as required. I did a lot of reading on this and all indications point to this "should work" if the links have the same attributes for speed, duplex, etc. I couldn't find anything that was set differently, but it would only work on whichever link type was "activated" first on bootup. A workaround for me was to put all the fibers in one channel group and all the copper in a second, and let spanning tree turn off one set of links; in the event of a failure (of all of one type) the other set did come up (i.e. if the fiber got cut, the coppers came online, but only if all the fibers went offline). In my original post, if you recall, I had 2 fibers connecting the switches and then 2 copper connections (via dishes) connecting the same 2 switches. So I got redundancy but not additional bandwidth.

Not an issue since the fiber is going away in this case, but I had originally connected all 4 links so it would continue to work when the fiber is removed. Since I am changing both sides of this link to 4948E, I can perform a test to see if this "feature" was also corrected.
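
For illustration, the workaround looked roughly like this (interface numbers and the channel-group mode here are placeholders, not the exact values I used):

! All fiber links in one channel group
interface range GigabitEthernet1/45 - 46
 switchport mode trunk
 channel-group 1 mode desirable
!
! All copper links in a second channel group; spanning tree blocks one
! Port-channel and only unblocks it if every link of the other type fails
interface range GigabitEthernet1/1 - 2
 switchport mode trunk
 channel-group 2 mode desirable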

BTW, I recall on the 6500 series, by default, you couldn't have channel groups across ports on different line cards with different card architectures, even if the ports were the same speed and all copper or all fiber.  So, it may truly have been an (unnecessary) "feature".

Jens Albrecht
Spotlight

Let's add another lab setup with a few more switches for Customer 1:

JensAlbrecht_0-1746479617103.png

The configuration is the same as before. The ISP Core switches A and B use the same dot1q-tunnel config with Vlan 10 as the tunnel (access) vlan for all connections to Customer 1. Switches C, D, E and F at the customer site use a simple trunk with native vlan 1.

The logical topology for Customer 1 looks as if all his switches were connected to an unmanaged switch that transparently passes all traffic including the L2-control traffic for CDP, LLDP, STP and VTP.

So when you do a "show cdp neighbor" e.g. on switch C, it will show switches D, E and F as neighbors.
Of course, you also have full IP connectivity in vlans 1, 20 and 40, with isolation from the ISP Core and any other customer, even in the case of overlapping vlans.
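
Based on that description, the customer-facing ports on switches A and B (e.g. Gi1/0/1 and Gi1/0/2, matching the MAC tables further down) would look something like this sketch; the l2protocol-tunnel lines are what lets CDP, LLDP, STP and VTP pass through, and lldp tunneling depends on platform support:

interface GigabitEthernet1/0/1
 switchport access vlan 10
 switchport mode dot1q-tunnel
 l2protocol-tunnel cdp
 l2protocol-tunnel lldp
 l2protocol-tunnel stp
 l2protocol-tunnel vtp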

Note: Things do change if you want/need to use the protocols LACP, PAGP or UDLD as those require a point-to-point design.
As the official documentation states:
To avoid a network failure, make sure that the network is a point-to-point topology before you enable tunneling for PAgP, LACP, or UDLD packets. 
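
For completeness, that tunneling is enabled per protocol on the tunnel port, along these lines (a sketch only):

interface GigabitEthernet1/0/1
 switchport mode dot1q-tunnel
 l2protocol-tunnel point-to-point pagp
 l2protocol-tunnel point-to-point lacp
 l2protocol-tunnel point-to-point udld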

@Jens Albrecht this was on actual hardware, correct?

The logical topology for Customer 1 looks as if all his switches were connected to an unmanaged switch that transparently passes all traffic including the L2-control traffic for CDP, LLDP, STP and VTP.

Seems to work exactly as I expected.

BTW, nice logical topology diagram.

I do want to re-emphasize the logical "unmanaged switch", or as I also called it, a "virtual hub", because, I believe, every "external" device's traffic injected into any .1Q tunnel port will be flooded to all the other (corresponding) .1Q tunnel ports for egress.  This is because, for example, in the logical diagram, if Switch C had traffic it "knows" it wants to send to Switch F, the .1Q tunnels don't see that.


@Joseph W. Doherty wrote: this was on actual hardware, correct?

Yes! Similar to the previous labs, I used 2 x 9200 switches for the ISP Core and 3560/3750 switches for the customer network.

CML appears to reach its limits as these labs get a bit more complex. When I started configuring this setup in CML, things initially looked good. However, I could not finish testing, so I saved the lab and wanted to continue the next day. After restarting the lab, the L2-control protocols were no longer transmitted over the trunk between switch A and B. So protocols like CDP and VTP only worked between switches C and E and between D and F, and every switch had only 1 neighbor according to CDP. Network traffic was still fine, so that e.g. switch C could ping all 3 switches in the customer vlans, but the control traffic stopped working. So for this kind of lab, hardware is the only way to go.


@Joseph W. Doherty wrote:

I do want to re-emphasize the logical "unmanaged switch", or as I also called it, a "virtual hub", because, I believe, every "external" device's traffic injected into any .1Q tunnel port will be flooded to all the other (corresponding) .1Q tunnel ports for egress.  This is because, for example, in the logical diagram, if Switch C had traffic it "knows" it wants to send to Switch F, the .1Q tunnels don't see that.


Sorry, but I cannot agree on this one.

The ISP Core switches do not just blindly flood all traffic like your description of a "virtual hub" implies.

I called the logical ISP device an "unmanaged switch" with intent because in fact the regular MAC address learning rules do apply for this device. As you'd expect from a switch, multicast and unknown unicast traffic are flooded, but known unicast traffic is not!

Both ISP Core switches learn MAC addresses as traffic from Customer 1 passes through the core network. The dot1q-tunnels in the above topology use Vlan 10 as their tunnel (access) vlan, so that all MAC addresses from Customer 1 devices appear in the MAC address-table for Vlan 10.

This behavior can be easily proven when looking at the output from the physical lab.

MAC addresses of Customer 1 devices:

SW_C#sh int vlan 1 | in bia
  Hardware is EtherSVI, address is ec30.91f8.9940 (bia ec30.91f8.9940)
SW_C#sh int vlan 40 | in bia
  Hardware is EtherSVI, address is ec30.91f8.9942 (bia ec30.91f8.9942)
SW_C#

SW_D#sh int vlan 1 | in bia
  Hardware is EtherSVI, address is 0026.52a6.11c0 (bia 0026.52a6.11c0)
SW_D#sh int vlan 40 | in bia
  Hardware is EtherSVI, address is 0026.52a6.11c2 (bia 0026.52a6.11c2)
SW_D#

SW_E#sh int vlan 1 | in bia
  Hardware is EtherSVI, address is 0017.948d.0bc0 (bia 0017.948d.0bc0)
SW_E#sh int vlan 40 | in bia
  Hardware is EtherSVI, address is 0017.948d.0bc1 (bia 0017.948d.0bc1)
SW_E#

SW_F#sh int vlan 1 | in bia
  Hardware is EtherSVI, address is 0012.01d4.9b40 (bia 0012.01d4.9b40)
SW_F#sh int vlan 40 | in bia
  Hardware is EtherSVI, address is 0012.01d4.9b41 (bia 0012.01d4.9b41)
SW_F#

After pinging each device from SW_C in vlan 1 and 40, the MAC address-table of the ISP Core switches shows the expected entries:

SW_A#sh mac address-table vlan 10
          Mac Address Table
-------------------------------------------

Vlan    Mac Address       Type        Ports
----    -----------       --------    -----
  10    0012.01d4.9b02    DYNAMIC     Gi1/0/24  <-- Int Gi0/1   - SW_F
  10    0012.01d4.9b40    DYNAMIC     Gi1/0/24  <-- Int vlan  1 - SW_F
  10    0012.01d4.9b41    DYNAMIC     Gi1/0/24  <-- Int vlan 40 - SW_F
  10    0017.948d.0b82    DYNAMIC     Gi1/0/2   <-- Int Gi0/1   - SW_E
  10    0017.948d.0bc0    DYNAMIC     Gi1/0/2   <-- Int vlan  1 - SW_E
  10    0017.948d.0bc1    DYNAMIC     Gi1/0/2   <-- Int vlan 40 - SW_E
  10    0026.52a6.1181    DYNAMIC     Gi1/0/24  <-- Int Gi0/1   - SW_D
  10    0026.52a6.11c0    DYNAMIC     Gi1/0/24  <-- Int vlan  1 - SW_D
  10    0026.52a6.11c2    DYNAMIC     Gi1/0/24  <-- Int vlan 40 - SW_D
  10    ec30.91f8.9901    DYNAMIC     Gi1/0/1   <-- Int Gi0/1   - SW_C
  10    ec30.91f8.9940    DYNAMIC     Gi1/0/1   <-- Int vlan  1 - SW_C
  10    ec30.91f8.9942    DYNAMIC     Gi1/0/1   <-- Int vlan 40 - SW_C
Total Mac Addresses for this criterion: 12
SW_A#

SW_B#sh mac address-table vlan 10
          Mac Address Table
-------------------------------------------

Vlan    Mac Address       Type        Ports
----    -----------       --------    -----
  10    0012.01d4.9b02    DYNAMIC     Gi1/0/2   <-- Int Gi0/1   - SW_F
  10    0012.01d4.9b40    DYNAMIC     Gi1/0/2   <-- Int vlan  1 - SW_F
  10    0012.01d4.9b41    DYNAMIC     Gi1/0/2   <-- Int vlan 40 - SW_F
  10    0017.948d.0b82    DYNAMIC     Gi1/0/24  <-- Int Gi0/1   - SW_E
  10    0017.948d.0bc0    DYNAMIC     Gi1/0/24  <-- Int vlan  1 - SW_E
  10    0017.948d.0bc1    DYNAMIC     Gi1/0/24  <-- Int vlan 40 - SW_E
  10    0026.52a6.1181    DYNAMIC     Gi1/0/1   <-- Int Gi0/1   - SW_D
  10    0026.52a6.11c0    DYNAMIC     Gi1/0/1   <-- Int vlan  1 - SW_D
  10    0026.52a6.11c2    DYNAMIC     Gi1/0/1   <-- Int vlan 40 - SW_D
  10    ec30.91f8.9901    DYNAMIC     Gi1/0/24  <-- Int Gi0/1   - SW_C
  10    ec30.91f8.9940    DYNAMIC     Gi1/0/24  <-- Int vlan  1 - SW_C
  10    ec30.91f8.9942    DYNAMIC     Gi1/0/24  <-- Int vlan 40 - SW_C
Total Mac Addresses for this criterion: 12
SW_B#

Of course, besides the MAC addresses of the SVIs, the address-table also includes the MAC addresses of the physical interfaces of switches C, D, E and F as the sources of the L2 control-protocol frames.

Hence known unicast traffic from Customer 1 is not flooded but is filtered based on the MAC address-table and forwarded only to the port where the destination device is actually connected.

Multicast and unknown unicast traffic must be flooded, and that becomes important when the customer wants to run Etherchannels across the ISP Core. Then the design has to follow a strict point-to-point topology, but that's another story and I'll leave it for an upcoming separate post.

Hope I made my point clear, otherwise please let me know.

My reply before this edit was either still in error and/or confusing in its description of a potential "issue".  Tonight, I'll try again.

Hope I made my point clear, otherwise please let me know.

You have!  I had to ponder what was happening to realize my mistake.  Which was: since we're hiding the original VLANs, I had an L3 mindset, thinking the original MACs would be encapsulated too.  Of course, that's not the case.

Your description treating the internal tunnel as a switch is very apt.  Unfortunately, I believe this internal switch is "missing" a couple of features.

Consider a "normal" set of VLAN switches, SW1<>SW2<>SW3.  The interconnection are all trunks.  SW1 has VLANs 11 and 12, SW3 has VLANs 31 and 32, and SW2 has VLANs 11, 12, 21, 22, 31 and 32. So, what happens, if a SW2 host, in VLAN 21, ARPs for another host in VLAN 21? That broadcast (by default) is sent down the trunks to both SW1 and SW3, and then ignored by those switches.  To make that more efficient, we can prune VLANs off the trunks, but I suspect we have a somewhat similar issue with .1Q ports, that cannot be pruned.  I suspect, any frames injected into a tunnel port, that we don't have a specific tunnel egress port for, will be flooded to all tunnel egress ports.

Given:

JosephWDoherty_0-1746568335350.png

Where switch0 is the tunneling switch, switch1 has VLANs 20 and 30, switch2 just has VLAN 20 and switch3 just has VLAN 30, I suspect a broadcast on switch1, made on VLAN 20, will be transmitted to both switch2 and switch3 just as if all the interfaces were trunk ports.  But, on real trunk ports, we could, on switch0, limit its egress to switch2 to only allow VLAN 20 and also limit its egress to switch3 to VLAN 30.

Probably not usually a significant problem, but there is a reason for pruning VLANs off trunk ports.

Another interesting case, which is very highly unlikely to be encountered: say switch2 and switch3 each have hosts with the same MAC (via an LAA).  Normally, this is a non-issue, because the MAC only needs to be unique per VLAN.  However, switch0 will (?) have that MAC in the tunnel VLAN, but learned on two separate egress ports.  So, if switch1 sends to that MAC on VLAN 10, will switch0 send it out to both switches 2 and 3?  If it does, each switch will determine whether the incoming frame was intended for VLAN 10, so only switch2 will forward the traffic to its (correct) host, but all the replicated traffic consumes bandwidth toward switch3.

Since this L2 tunneling isn't limited to p2p, it can have issues that wouldn't normally be issues.

 


@Joseph W. Doherty wrote:

Consider a "normal" set of VLAN switches, SW1<>SW2<>SW3.  The interconnection are all trunks...


The point is that dot1q-tunnel interfaces are not trunk ports! That's why the "switchport trunk native vlan <num>" command has no effect at all; instead you need to configure the "switchport access vlan <num>" command. These ports only process traffic for a single vlan, i.e. the one you configured as the access vlan (this statement refers to traffic coming from the backplane; all traffic from the customer is double-tagged with this vlan, of course).

Let's say you configured "switchport access vlan 100" on port Gi1/0/1. Then the backplane of the ISP Core switch will only send traffic for vlan 100 to this port. It will not send any traffic for any other vlan to this port, so there is no need to think about pruning or bandwidth consumed by unnecessarily replicated traffic. This simply does not happen.
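
In config terms (a sketch), the access vlan is what selects the tunnel:

interface GigabitEthernet1/0/1
 switchport access vlan 100
 switchport mode dot1q-tunnel
! switchport trunk native vlan 99   <-- would have no effect on this port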

If the switches were flooding traffic for different vlans over the dot1q-tunnel interfaces, then it would become impossible to run Etherchannels across the ISP Core network. Etherchannels require a strict point-to-point connection. If this is not the case, they either stay down (in the case of LACP/PAgP) or quickly go into err-disabled state (in the case of mode ON).

Things should become clearer once I've had the time to finish the Etherchannel QinQ setup, probably tomorrow.

The point is that dot1q-tunnel interfaces are not trunk ports!

Agreed!  In fact, I've emphasized that point a couple of times in my prior replies.

However, these QinQ tunnel ports do accept multi-VLAN traffic from trunk ports, and also transmit multi-VLAN tagged traffic to trunk ports.  So, as I also wrote earlier, they are not classical trunk ports but do share some attributes of trunk ports.

Let's say you configured "switchport access vlan 100" on port Gi1/0/1. Then the backplane of the ISP Core switch will only send traffic for vlan 100 to this port. It will not send any traffic for any other vlan to this port, so there is no need to think about pruning or bandwidth consumed by unnecessarily replicated traffic. This simply does not happen.

I'm not so sure that's always the case.

Let's again consider this topology:

JosephWDoherty_0-1746633164075.png

Let's initially assume switches 1, 2 and 3 have VLANs, and all are using a trunk port to switch 0.  Switch 0 uses VLAN 100 as its tunnel VLAN, internally, to interconnect the 3 switches.

Switch 1 ARPs (a broadcast) for a VLAN 10 host.  Where does the ARP broadcast go? Would it be transmitted to both switches 2 and 3, tagged as VLAN 10?

If yes, suppose switch 3 has no VLAN 10 ports.  Does this preclude switch 0 from still transmitting the ARP tagged with VLAN 10 to both switches 2 and 3?

Isn't this the same issue as if, instead of QinQ, switch 0 were using "normal" trunk ports?

The difference, though, is that with "normal" trunk ports I can prune (block) VLAN 10 traffic to switch 3.  I would do this because I know VLAN 10 isn't being used on that switch, and I don't want to send needless VLAN 10 traffic to switch 3.  (With non-QinQ, pruning done by VTP would also be an option.)
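
On switch 0, that pruning would just be the usual allowed-VLAN list on the trunk toward switch 3 (interface number invented for the sketch):

interface GigabitEthernet0/3
 switchport mode trunk
 switchport trunk allowed vlan remove 10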

But how do we prune VLAN 10, toward switch 3, on a QinQ tunnel port?

For something like ARPs, this is unlikely to be a major problem, but it applies to all broadcast traffic.  Multicast traffic might have a similar issue.

Lastly, there's the remote chance we could see the issue with unicast flooding.  (Which is why some design guides recommend, when using an L3 distro connecting to L2 access, not extending/allowing L2 access VLANs to span across multiple distro<>access links.)

In effect, I believe, from the viewpoint of switches 1, 2 and 3, they are logically trunk-connected to a shared (virtual) switch and will obtain similar behavior, but unlike with a physical shared switch, we cannot (can we?) block VLANs leaving a particular QinQ tunnel port.

In typical practice, I doubt the foregoing would be a practical issue, for one reason: we'll likely limit the traffic being sent to a QinQ tunnel port to specific VLANs.  I.e., switches 1, 2 and 3 all use VLANs 10 and 20, rather than switch 1 connecting to a QinQ tunnel port using VLANs 10 and 20 while switch 2 only connects with VLAN 10 and switch 3 only connects with VLAN 20.

So again, I would need further explanation of why "This simply does not happen", assuming it also means it can never happen.

 

You mention LACP, but I recall the QinQ documentation says you need to use it carefully, like restricting the overall QinQ topology, for it to work correctly.  I recall one example of using LACP with QinQ would be a physical topology like this (which I'm not endorsing as recommended):

JosephWDoherty_1-1746634543151.png

Where switches 1 and 2 provide the QinQ tunnel ports.

Where one LACP link is SW0<>SW1<>SW2<>SW3 using, say, VLAN 100 on switches 1 and 2.

Where another LACP link is SW0<>SW2<>SW1<>SW3 using, say, VLAN 101 on switches 1 and 2.

Where switches 0 and 3 see their two links as a "normal" Etherchannel, optionally carrying multiple VLANs.
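
Sketched out, switches 0 and 3 would run a standard LACP bundle, while switches 1 and 2 would place each member link in its own tunnel VLAN and tunnel LACP point-to-point (interface numbers invented):

! On SW0 (similarly SW3)
interface range GigabitEthernet0/1 - 2
 switchport mode trunk
 channel-group 1 mode active
!
! On SW1 (similarly SW2), per member link - one in VLAN 100, the other in VLAN 101
interface GigabitEthernet0/1
 switchport access vlan 100
 switchport mode dot1q-tunnel
 l2protocol-tunnel point-to-point lacp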