cmjr86
Beginner

ACI Fabric - connectivity to Telco appliances (MME,SPGW,...)

Hello,

 

I work in the OSS area, and I'm trying to model a Data Center ACI fabric in a telco operator scenario, for example with virtualized core platforms like the MME or SPGW.

https://www.ciscolive.com/c/dam/r/ciscolive/emea/docs/2019/pdf/BRKACI-3620.pdf

 

 

First question: is VxLAN configured only between the spines, or is it also used at the leaf level?

Picture: https://paste.pics/0201693319fe6f30743e4151188833f4

 

Second - how is an interface like S1-MME or S1-U routed in the data center, from the firewall to the MME/SPGW function running in a virtual machine? From the eNodeB to the firewall (e.g. ASR903), S1-MME is "encapsulated" in IPsec.

Picture: https://paste.pics/621dd13e04b6a66f4e436de1dca7f60c

 

Between the ASR903 and ASR9000 - a 40/100 Gb IP link? And then from the ASR9000 to the spines?

And from spines to leafs and finally to the MME network function?

 

There are 100/40 Gb Ethernet links, but S1-MME will not be routed over each individual link; it will be carried via some routing protocol, VxLAN, or similar, no?

 

Simple case without ACI Fabric and without IPsec: https://paste.pics/43cc9555d4d3bf8d94a7041ef8825753

 

I would appreciate any help.


5 Replies
joezersk
Cisco Employee

Hello.  I'll have a go but first I must shyly admit that I have no SP experience, where most of my work has been in the Enterprise.  While I might not be able to understand / answer everything you've presented here, I think I can add some conceptual knowledge about ACI that applies to all topologies.

First, because you work in an SP environment, it might help to think about VxLAN as "MPLS for the DC Switched Network".  I say that because fundamentally they are both designed to solve a similar set of problems (although one in the WAN and another in the DC).  Because ACI is multi-tenant, we need a way to provide L2/L3 reachability across what is a purely routed DC fabric and keep all the tenants separate from each other.  Instead of an MPLS tag, we can use VNIDs and other tags in ACI, which are available to us with VxLAN. 

With that pre-understanding behind us, the answer to your first question "VxLAN will be configured only between spines or VxLAN is also at leaf level?" is that we use VxLAN exclusively between leafs and spines only (spines don't connect directly to other spines).  This is to say that any endpoints you attach to ACI don't need to understand VxLAN and can work as they have always worked, using standard Ethernet/IP and VLANs.  While I am taking great liberties with how MPLS works, conceptually it is useful to view your ACI leafs as MPLS-CE routers.  When an endpoint (ACI-speak for anything you connect to a leaf port where we learn MAC and/or IP, regardless of form factor, like VM or bare metal) wants to speak to another endpoint that lives on another leaf, the leaf encaps that traffic into a VxLAN packet, and using the right VNID (ACI's tagging method) we can send it along across the spines until it gets to the far-side leaf, where it is de-encapsulated from VxLAN, put back into a normal 802.1Q Ethernet frame, and forwarded down to the correct receiving endpoint.  I am oversimplifying this process but conceptually it works like that. 
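To make the VNID tagging above concrete, here is a minimal Python sketch of the 8-byte VxLAN header defined in RFC 7348 that a leaf prepends to the inner Ethernet frame. The outer UDP/IP headers toward the far-side leaf's VTEP are omitted, and the function names are purely illustrative, not from any Cisco API:

```python
import struct

def vxlan_encap(inner_frame: bytes, vnid: int) -> bytes:
    """Prepend an 8-byte VXLAN header (RFC 7348) to an inner Ethernet frame.
    A real leaf would then add outer UDP/IP/Ethernet headers addressed to the
    far-side leaf's VTEP; only the VXLAN header itself is shown here."""
    if not 0 <= vnid < 2**24:
        raise ValueError("VNID is a 24-bit field")
    # Byte 0: flags, 0x08 = "VNI present"; 3 reserved bytes; then
    # 24-bit VNI followed by 1 reserved byte (hence the << 8 shift).
    header = struct.pack("!B3xI", 0x08, vnid << 8)
    return header + inner_frame

def vxlan_decap(packet: bytes) -> tuple[int, bytes]:
    """Far-side leaf: strip the VXLAN header, recover the VNID and inner frame."""
    flags, vni_shifted = struct.unpack("!B3xI", packet[:8])
    assert flags & 0x08, "VNI-present flag must be set"
    return vni_shifted >> 8, packet[8:]
```

The VNID plays the role the MPLS label plays in the WAN: it is the only thing the fabric needs to keep one tenant's frames apart from another's while they cross the spines.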

As for your second question, "how an interface like S1-MME or S1-U is routed in Data Center...considering IPSEC" - to ACI it really doesn't matter.  As long as ACI knows how to reach the IPSEC tunnel endpoints, it will happily send traffic as it normally does.  Obviously, ACI won't know what's inside the IPSEC tunnel, but it will view the tunnel endpoints as regular ACI endpoints, without needing anything special.
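The point that the fabric only ever needs outer-header reachability can be illustrated with a toy longest-prefix-match lookup (stdlib only; the routing table and leaf names below are invented for illustration):

```python
import ipaddress

# Hypothetical routing state on a leaf: prefix -> next hop.
ROUTES = {
    ipaddress.ip_network("10.1.0.0/16"): "leaf-101",
    ipaddress.ip_network("10.1.2.0/24"): "leaf-102",
}

def next_hop(outer_dst: str) -> str:
    """Longest-prefix match on the OUTER destination address only.
    The fabric never inspects the ESP payload, so whatever S1-MME
    traffic rides inside the IPsec tunnel is irrelevant to forwarding."""
    dst = ipaddress.ip_address(outer_dst)
    matches = [net for net in ROUTES if dst in net]
    return ROUTES[max(matches, key=lambda net: net.prefixlen)]
```

In other words, as long as the tunnel endpoint's address is reachable, the encrypted S1-MME traffic is forwarded like any other IP traffic.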

I want to have a side discussion on your design (second image).  Normally, all endpoints, including routers that lead to the outside world or other DCs, are attached to the leafs.  We don't connect endpoints to spines; only leafs connect to spines.  This holds true in ACI until you start talking about advanced solutions we call Multi-Pod and Multi-Site.  I am not sure if you are considering those solutions for ACI, so I don't want to write too much here.  Just note that because you have multiple DCs, these solutions go a long way toward making that DCI and operations a lot easier than it has been in the past.  I suggest you check them out to know what your options are. 

So to summarize, ACI will view your Apps/Interfaces/VMs/Whatever as normal Ethernet/IP based endpoints.  It will happily forward traffic for them (assuming you tell ACI to do this) just as it has always been done in the traditional networking past. But now we have the benefit of VxLAN as a way to scale, and keep traffic separate as it moves across a multi-VRF, multi-tenant ACI core, just like MPLS tags do in the WAN. 

I hope that made sense ;)  I also ask forgiveness from any SP wizards reading this for my gross oversimplification of things like MPLS and VxLAN. 


Wow... thanks @joezersk for the reply and comprehensive answer. :)

I need to sit down and read this carefully... I believe we can have a side discussion to check some details.


Thanks again @joezersk 

So outside ACI we will have standard Ethernet/IP and VLANs, i.e. between leaf and host, and from the spine to the outside (ASR9000, firewall, etc.) - correct?

https://i.paste.pics/80806796510a95d5c34f6ddf8e6e68cc.png

 

Additional question - how does multi-site DC connectivity work? Like this:

https://www.routexp.com/2018/09/introduction-to-cisco-aci-stretched.html

But I see some pictures where we have VxLAN between spines:

https://www.wwt.com/api/attachments/5db9d528b4af20008756ecd0/file

 

Based on your explanation, the correct approach would be spines connecting to the leafs of the local site, and the spines of different sites "connected" via MP-BGP EVPN - is this correct?

Hello again.  The answer to the first question, based on your very Picasso-like drawing (meant as a compliment), is yes, VxLAN between leafs and spines always, and standard Eth/IP between the leaf's front panel ports and whatever end point is connected there. 

About DC Interconnect, first thing is to forget about anything that says "Stretched Fabric".  That is an older design that has since been improved upon and replaced by Multi-Pod and/or Multi-Site.  Multi-pod and -site are the current and best designs for those cases where you have more than one DC.  In those designs we do connect spines to other spines, but across a generic L3 routed network (like your MPLS core).  In both designs, we use MP-BGP as our control plane (to send MAC/IP reachability across sites), and we use VxLAN as our data plane.  This answers your last question.  To be clear, in a single fabric, leafs only connect to spines, full stop.  No spines to spines, no leafs to leafs.  In a multi-pod or -site design, we connect the spines up to a WAN, which you could argue represents "another kind of leaf" in order to reach the spines in the other sites.
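As a rough illustration of that MP-BGP control plane, the sketch below models an EVPN Type-2 (MAC/IP advertisement) route, the message that carries endpoint reachability between sites, and how a receiving site might index it. All field values and names here are made up for the example:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class EvpnMacIpRoute:
    """Rough model of a BGP EVPN Type-2 (MAC/IP) route - the control-plane
    advertisement that tells remote sites where an endpoint lives."""
    rd: str    # route distinguisher, keeps tenants apart (like an MPLS VPN)
    mac: str   # endpoint MAC learned on the local leaf
    ip: str    # endpoint IP (optional in the real route; shown for clarity)
    vni: int   # VXLAN VNID to use on the data plane
    vtep: str  # spine/VTEP address to send VXLAN-encapped traffic toward

# Site A advertises a locally learned endpoint across the inter-site network...
advertised = EvpnMacIpRoute(
    rd="65001:10",
    mac="00:50:56:aa:bb:cc",
    ip="10.1.2.5",
    vni=30001,
    vtep="192.0.2.11",
)

# ...and Site B builds its remote-endpoint table from the routes it receives:
# (VNID, MAC) -> which VTEP to tunnel to.
remote_endpoints = {(r.vni, r.mac): r.vtep for r in [advertised]}
```

MP-BGP carries these reachability facts between sites (control plane), and the actual user traffic then rides VxLAN tunnels toward the advertised VTEP (data plane), exactly the split described above.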


Clear! Thanks again @joezersk for your valuable inputs!