I'll explain a little of what we are trying to accomplish. I work for a state service provider that supports transport for about 50 state agencies (our customers). Our core network runs MPLS (with OSPF as the IGP) and has 3 route reflectors. Layer 3 MPLS VPNs are used for customer isolation, and all of our PEs are route reflector clients of each of these RRs. Since we do not own media to each of our buildings, we use EVPL service from Verizon to connect our PEs to multiple CEs that typically house multiple customers. We have a 1 Gbps EVPL circuit at the PE and typically a 10 or 100 Mbps EVPL circuit at the CE location. Verizon assigns us a VLAN (EVC) for the connection between the PE and each CE. We then create a sub-interface on the PE's EVPL interface with dot1q encapsulation for the assigned EVC, and the same is done on the CE side.
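For illustration, the PE side of one of these EVC sub-interfaces looks roughly like this (interface name, VLAN/EVC number, and addressing are all hypothetical, not our real values):

```
! PE side -- one dot1q sub-interface per Verizon-assigned EVC
! (VLAN 101 and the addressing below are made-up examples)
interface GigabitEthernet0/0/1.101
 description EVPL to CE-SITE-A (EVC 101)
 encapsulation dot1Q 101
 ip address 10.0.101.1 255.255.255.252
 mpls ip
```

The `mpls ip` is there because, as described below, we currently extend MPLS/OSPF down to the CE over this link.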
The issue comes with supporting multiple customers at these sites. Since we are sub-interfacing the connection facing Verizon, we cannot create multiple sub-interfaces (one per customer VRF) back to the PE for VRF-Lite. If we could, we would simply run eBGP per customer. Instead, these CEs peer with the RRs and have MPLS/OSPF extended down to them, essentially turning them into additional PEs. Our RRs are now up to about 120 of these extra peers, which is clearly not very scalable either.
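To make the constraint concrete, here is a sketch of the per-customer VRF-Lite handoff we would use if we could — it is exactly this that the single EVC per site rules out (VRF names, AS numbers, and addressing are hypothetical):

```
! Hypothetical VRF-Lite handoff -- NOT possible in our setup, since
! Verizon delivers only one EVC (one dot1q tag) per CE site
interface GigabitEthernet0/0/1.201
 encapsulation dot1Q 201
 vrf forwarding CUSTOMER-A
 ip address 10.1.201.1 255.255.255.252
!
router bgp 65000
 address-family ipv4 vrf CUSTOMER-A
  neighbor 10.1.201.2 remote-as 65101
  neighbor 10.1.201.2 activate
```

With one of these per customer per site, the CE would stay a plain CE and the RRs would never see it.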
We have looked at some solutions:
1) Create multiple GRE tunnels, using separate loopbacks for unique source/destination pairs. This works, but it means additional loopbacks and tunnels to build for each customer at each location.
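A minimal sketch of option 1, with one loopback plus one tunnel per customer per site (all interface numbers and addresses are hypothetical):

```
! Option 1: unique tunnel source per customer via a dedicated loopback
interface Loopback101
 ip address 192.0.2.101 255.255.255.255
!
interface Tunnel101
 vrf forwarding CUSTOMER-A
 ip address 10.2.101.1 255.255.255.252
 tunnel source Loopback101
 tunnel destination 198.51.100.101
```

This pattern repeats for every customer at every location, which is where the operational overhead comes from.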
2) Create multiple GRE tunnels using the same source and destination, differentiated by tunnel keys. This also works, but my concern is that using keys will force the traffic to be software switched rather than hardware switched, which will not scale.
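Option 2 looks roughly like this (again, all values are hypothetical) — the tunnels share one source/destination pair and are demultiplexed by key:

```
! Option 2: shared source/destination, per-customer tunnel key
interface Tunnel201
 vrf forwarding CUSTOMER-A
 ip address 10.3.201.1 255.255.255.252
 tunnel source Loopback0
 tunnel destination 198.51.100.1
 tunnel key 201
```

Whether keyed GRE is hardware switched is platform dependent, which is exactly the concern raised above.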
3) Have Verizon issue multiple EVCs, one per customer at each site. This would also work; however, there are limits on how many EVCs can be assigned to a single EVPL circuit at the PE. Currently a 1 Gbps EVPL circuit can support up to 75 EVCs, so this will not scale well.
4) Configure Q-in-Q at both ends of the circuit in order to carry multiple VLANs from the CE to the PE. This will not work since the PE side would also need to be set to Q-in-Q, and there would be no way to set multiple S-tags for the multiple locations.
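For reference, the double-tag matching that option 4 would have required on the PE looks something like the following (outer tag = Verizon EVC, inner tag = customer; all values hypothetical). As noted above, we don't see a workable way to apply this across multiple locations on our EVPL service:

```
! Option 4 (rejected): match outer EVC tag and inner customer tag
interface GigabitEthernet0/0/1.100101
 encapsulation dot1Q 100 second-dot1q 101
 vrf forwarding CUSTOMER-A
 ip address 10.4.101.1 255.255.255.252
```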
We have an onsite CCIE SP and this is stumping him as well.