Edge Node with cascaded switch

tsgruu2000
Level 1

Hi all,

I am quite sure this is not a recommended design, but in case we run out of access ports on an edge node and don't have a fabric-capable switch available at the moment: is it possible to connect a traditional switch via a trunk?

Thanks!

14 Replies

Mike.Cifelli
VIP Alumni

To answer your question: this would not work inside your SDA fabric. Here is why:

 

In order for a device and its connection to be joined to your fabric and seen in DNAC, you would need to create an L3 point-to-point routed link.
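As a rough illustration of what such a routed hand-off looks like on the edge node side (the interface number, /30 addressing and description below are assumptions, not DNAC-generated config):

! Illustrative sketch only - interface and addressing are assumptions
interface TenGigabitEthernet1/1/1
 description P2P routed link towards upstream fabric node
 no switchport
 ip address 172.16.10.1 255.255.255.252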

 

To accomplish something similar to your scenario there is a solution, which is to implement an internal border node (IBN) inside your network that would basically sit between your fabric (SDA-capable devices) and your legacy, non-SDA-capable devices. This IBN would then do the translation between the legacy network and the fabric. The IBN would be an iBGP peer of your EBNs. Your legacy switch would then sit in a different private eBGP AS that peers with the IBN, and you would redistribute whatever networks you want accessible between fabric and non-fabric.
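A rough sketch of the BGP piece on that IBN, assuming placeholder private AS numbers (65001 for the fabric side, 65002 for the legacy switch), a hypothetical VN/VRF name and made-up neighbor addresses; treat it as an illustration of the peering model rather than a validated hand-off config:

! Sketch only - AS numbers, VRF name and neighbor IPs are placeholders
router bgp 65001
 address-family ipv4 vrf CAMPUS_VN
  ! iBGP towards the external border nodes
  neighbor 10.0.0.1 remote-as 65001
  neighbor 10.0.0.1 activate
  ! eBGP towards the legacy switch in its own private AS
  neighbor 192.168.255.2 remote-as 65002
  neighbor 192.168.255.2 activate
  redistribute connected
 exit-address-family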

 

Or you could purchase another edge node that links up to your intermediate nodes (INs) for future access-port expansion.

AndiBuchmann157
Level 1

Just my 2 Cents:

 

Fabric-capable switches (like the Catalyst 9k) can be stacked as an edge node. I don't see any problems there.

 

Connecting a non-fabric-capable switch via trunk for "port expansion" - no way.

Probably I am completely wrong, but in my understanding the edge node still acts as an L3 switch. So I guess it registers clients attached to the traditional switch in its MAC and ARP tables. Wouldn't traffic to these clients be forwarded?

I'd love to try this in a lab. Please excuse my ignorance.

IMO this won't work. All edge nodes need to be capable of running LISP and VXLAN.

 

I am not sure, but my guess is this is necessary because otherwise the LISP map server doesn't know where to forward traffic whose destination is behind an L3 switch trunked to an edge node.

 

Don't pin me down on this...

 

 

Hello,

We have a few options right now for adding ports:

1-If the fabric edge (FE) switch that has run out of ports is capable of stacking, then stack in another chassis

2-Or, add one or several point-to-point routed links from the fully populated FE to another new FE e.g. BORDER----INTERMEDIATE_NODE(optional)----FULLY_POPULATED_FE----NEW_FE

This was discussed recently in a different thread, FYI:
https://community.cisco.com/t5/cisco-digital-network/sda-edge-node-behind-edge-node/td-p/3750493

3-Or, deploy a traditional, non-SD-Access switch external to the Cisco SD-Access fabric, and connect that traditional switching domain to the fabric through the border

4-Or, Extended Node might be suitable depending on time frames and use cases. Extended Node is in beta right now and is thus to be used for testing purposes, not production use cases, at this moment. It should transition from beta to general availability (GA) in the not-too-distant future. If Extended Node is desirable, I recommend working closely with your Cisco pre-sales representatives to get GA time frames and have your use cases validated, since Extended Node at this time supports a subset of SD-Access use cases. IE4K, IE5K, 3560-CX and CDB can be Extended Nodes as per the Cisco SD-Access compatibility matrix:

https://www.cisco.com/c/en/us/solutions/enterprise-networks/software-defined-access/compatibility-matrix.html

Extended Node is configured and managed by Cisco DNA Center. It connects to the FE via an 802.1Q trunk, and the FE takes care of registering reachability information for endpoints connected to the Extended Node.

Cisco SD-Access is an end-to-end solution tested by Cisco before a version is released; that is why we are prescriptive about which code versions and platforms are allowed to be implemented together. The validated versions and platforms are listed in the compatibility matrix link I shared above.

Today, #1 or #2 listed above would generally be the best solutions in the scenario you've described. #1, #2 and #3 are fully supported by TAC now. I'd encourage people not to connect a not-solution-validated switch to an FE, since it's unlikely to be supported by TAC as of Jan 2019. Things may change later, of course; if you're reading this in 2020, then best to find a more recent answer ;).

Cheers,
Jerome

I have a similar question, but a slightly different scenario. In this case, the customer is not looking for more ports, but plans to deploy SDA in a phased approach where the first devices to join the fabric will be four different switches that each reside in a different physical location and are connected in a square via MetroE. Each VLAN in the organization currently passes over these MetroE links at L2, and they are going to need these L2 adjacencies to remain intact after the four connecting switches are converted to fabric members. The part I want to make sure I get right is that they plan to have legacy/non-fabric switches connecting to the edge nodes (and maybe even a border node at one location), and they have made it clear that L2 needs to span end to end.

 

I understand that VXLAN will encapsulate both L2 and L3 along with CMD for VNI/SGT propagation, and that LISP can use either an IP or a MAC address as an EID, so extending L2 should not be an issue. However, I am not clear on whether an xTR just needs to see a CAM or ARP entry to register the EID, or whether the end device needs to be physically connected to the edge node for its information to be sent to the LISP mapping system database. In other words, if I connect a non-fabric switch to an edge node with a trunk carrying multiple VLANs, will the edge node register each MAC address it learns on the trunk with the MS/MR and create an EID for it to pass through the overlay?
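If I get the chance to lab this, I'd plan to check it with something like the commands below on the edge node; the L2 LISP instance ID and interface are just placeholders for whatever the deployment actually uses:

show lisp instance-id 8188 ethernet database
show device-tracking database interface GigabitEthernet1/0/13
show mac address-table interface GigabitEthernet1/0/13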

Hi Nathan,

 

Currently, on an SD-Access fabric edge switch, we support a maximum of 10 endpoints connected to each switch port. This means you cannot connect any L2 domain with more than 10 endpoints (e.g. a legacy network VLAN, or VLANs) to SD-Access fabric. This *may* change later, but definitely not in the near term.
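For reference, that per-port limit comes from the device-tracking policy DNA Centre pushes to access ports; the policy name and exact options below are what I typically see generated, though they may vary by release:

device-tracking policy IPDT_MAX_10
 limit address-count 10
 tracking enable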

 

Does that indirectly answer your question?

 

We could probably set up a call to explore options if you need, or you can ask your SE/AM to do that on your behalf.

 

Jerome

Jerome,

Thank you for the response. I also ran this question through some of your TMEs for SDA and was informed that connecting legacy switches to a fabric edge node is really not an option, especially when hoping to just extend L2 from a legacy network to another segment of the legacy network through the fabric.

Does that maximum of 10 endpoints also apply to something like a hypervisor with VMs attached to the trunk? I noticed that "Server" is one of the options for provisioning a port from host onboarding within the fabric. Either way, that is a good limitation to be aware of (10 endpoints per switch port).

Thank you for your time and response.

Hi Nathan,

No worries :)

DNA Centre sets a server port to an 802.1Q trunk with a maximum limit of 100 endpoints on the port. This is the only exception to the 10-endpoint maximum I mentioned previously. The expectation is that the hypervisor/server will set the correct VLAN tag to match the desired VN IP pools (i.e. the SVIs on the fabric edge switch).

Cheers!

Jerome

Hi Jerome,

 

I was searching for options regarding the connectivity of a 3rd-party switch to the fabric and I stumbled across this post. If DNAC configures a server port as a dot1q trunk with a limit of 100 endpoints, then is there any reason why the connected device cannot be a switch tagging the required VLANs to match the VN pools on the FE? I can't see how DNAC could differentiate between a hypervisor and a 3rd-party switch in this instance.

 

Thanks

Hello,

You are correct: DNA Centre and the SD-Access solution cannot differentiate between a hypervisor and a 3rd-party switch setting VLAN tags downstream of a server port. In this scenario you lose 802.1X/MAB on the port, and you lose assurance visibility of whatever switch is south of that 'server' port. So technically it's possible. As to whether it's supported, I'll need to ask one of the product managers to comment. Watch this space.

Jerome

Hi.

 

We have a migration scenario where we need the server port functionality and would use it for some use cases other than just a local server. Small-office switches, pop-up stands and dynamic projects are just some examples.

 

I just went to Cisco Live and got info about this function, but I got the impression the limit was 50 MAC addresses?

Erik

I can tell you that DNAC 1.3.1.3 configures a server port like this:
interface GigabitEthernet1/0/13
 switchport mode trunk
 device-tracking attach-policy IPDT_TRUNK_POLICY
 no macro auto processing
end
The default IPDT policy that gets assigned looks like this:
device-tracking policy IPDT_TRUNK_POLICY
 limit address-count 100
 no protocol udp
 tracking enable
You have the ability to create templates to tweak the default configs via the template editor. HTH!
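For example, a small day-N template could override the default limit on the trunk policy; $ADDRESS_LIMIT below is a template variable you would define yourself (Velocity syntax), and bear in mind DNA Center may re-apply its own defaults on re-provisioning:

! Sketch only - $ADDRESS_LIMIT is a user-defined template variable
device-tracking policy IPDT_TRUNK_POLICY
 limit address-count $ADDRESS_LIMIT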

Oliviakin
Level 1

Yes, you can build a point-to-point routed link between FE1 and FE2 and have them in a chain; that is allowed. It is fine on the 3650, 3850, 9300 and 9400, because on these platforms any switch port can be fabric facing. For the 4500, having FEs in a chain is possibly a problem, since on the 4500 the fabric-facing ports must be on the supervisor only, and it must be a Supervisor 8-E or Supervisor 9-E. Best regards, Jerome
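As a rough idea of the FE1 side of such a chained link (the interface, addressing and IS-IS underlay details are assumptions; LAN automation would normally generate the real config):

! Illustrative sketch only - values are placeholders
interface TenGigabitEthernet1/1/2
 description P2P routed link FE1 to FE2
 no switchport
 ip address 172.16.20.1 255.255.255.252
 ip router isis
!
router isis
 net 49.0001.1720.1600.2001.00
 is-type level-2-only
 metric-style wide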
