VXLAN/EVPN Configuration Example (N9k)

Lukas Krattiger
Cisco Employee

VXLAN/EVPN was released on the Nexus 9000 series in early February 2015, followed by the Nexus 7000/7700 (F3/M3 line cards) in the summer and the Nexus 5600 later in 2015. Other Cisco platforms, such as the ASR 9000 and ASR 1000, also support VXLAN with the EVPN control plane.

As there are many requests on how to configure VXLAN/EVPN on a given platform, this blog post should help you get started with a Nexus 9300/9500 (including the Nexus 9x00 EX/FX).

While this example focuses on numbered IP interfaces, the so-called P2P (point-to-point) approach, an "ip unnumbered" example is also available.
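For reference, here is a minimal sketch of what the unnumbered variant of a leaf uplink could look like, borrowing loopback0 from the leaf configuration further down (the rest of this post uses the numbered P2P approach):

interface Ethernet2/1
  description Link to Spine "1"
  mtu 9216
  medium p2p
  ip unnumbered loopback0
  ip ospf network point-to-point
  ip router ospf UNDERLAY area 0.0.0.0
  ip pim sparse-mode
  no shutdown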

Generally, we would expect a topology as shown below.

[Figure EVPN.jpg: generic VXLAN/EVPN spine-leaf topology]

For the sake of this example, we use the following topology, which is a subset of the one above.

[Figure EVPN.png: example topology with Spine "1" and the VPC leaf pair "V1"/"V2"]

The configuration example covers the following software components:

- Underlay with OSPF, PIM Sparse (ASM) and Anycast-RP

- IP numbered interfaces (p2p interfaces)

- VXLAN

- MP-BGP EVPN Control-Plane

- VPC

We will focus on the configuration of Spine "1", Leaf "V1", and Leaf "V2".

Spine "1" Configuration:

hostname SPINE1

nv overlay evpn

feature ospf

feature bgp

feature pim

feature nv overlay

ip pim anycast-rp 10.254.254.254 10.250.250.101

ip pim rp-address 10.254.254.254 group-list 239.239.239.0/24

interface Ethernet3/1

  description Link to Leaf "V2"

  mtu 9216

  ip address 10.1.1.6/30

  ip ospf network point-to-point

  ip router ospf UNDERLAY area 0.0.0.0

  ip pim sparse-mode

  no shutdown

interface Ethernet3/2

  description Link to Leaf "V1"

  mtu 9216

  ip address 10.1.1.2/30

  ip ospf network point-to-point

  ip router ospf UNDERLAY area 0.0.0.0

  ip pim sparse-mode

  no shutdown


interface loopback0

  ip address 10.250.250.101/32

  ip ospf network point-to-point # changes the OSPF network type from Loopback to point-to-point; required for VPC

  ip router ospf UNDERLAY area 0.0.0.0

  ip pim sparse-mode

interface loopback254

  ip address 10.254.254.254/32

  ip ospf network point-to-point

  ip router ospf UNDERLAY area 0.0.0.0

  ip pim sparse-mode

router ospf UNDERLAY

  router-id 10.250.250.101

  log-adjacency-changes detail

router bgp 65500

  router-id 10.250.250.101

  address-family ipv4 unicast

  neighbor 10.250.250.0/24 remote-as 65500

    update-source loopback0

    address-family ipv4 unicast # optional for "show ip bgp summary" support

    address-family l2vpn evpn

      send-community both

      route-reflector-client
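Once the spine is up, the underlay adjacencies and the EVPN route-reflector peerings can be verified with the usual NX-OS show commands, for example:

show ip ospf neighbors
show ip pim neighbor
show ip mroute
show bgp l2vpn evpn summary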



Leaf "V1" Configuration:

hostname LeafV1

nv overlay evpn

feature ospf

feature bgp

feature pim

feature interface-vlan

feature vn-segment-vlan-based

feature nv overlay

feature vpc

fabric forwarding anycast-gateway-mac 2020.DEAD.BEEF

ip pim rp-address 10.254.254.254 group-list 239.239.239.0/24

vlan 1,99-101,2500,3000

vlan 99

  name L2onlyHostSegment

  vn-segment 30099

vlan 100

  name L2L3HostSegment

  vn-segment 30000

vlan 101

  name L2L3HostSegment2

  vn-segment 30001

vlan 2500

  name FabricBD

  vn-segment 50000

vlan 3000

  name VPCL3Peering

route-map FABRIC-RMAP-REDIST-SUBNET permit 10

  match tag 21921

vrf context TENANT1

  vni 50000

  rd auto

  address-family ipv4 unicast

    route-target both auto

    route-target both auto evpn

  address-family ipv6 unicast

    route-target both auto

    route-target both auto evpn

vpc domain 1

  peer-switch

  peer-keepalive destination 10.2.8.1 source 10.2.8.2 vrf management

  peer-gateway

  ip arp synchronize

interface Vlan100

  no shutdown

  vrf member TENANT1

  ip address 192.168.100.1/24 tag 21921

  fabric forwarding mode anycast-gateway

interface Vlan101

  no shutdown

  vrf member TENANT1

  ip address 192.168.101.1/24 tag 21921

  fabric forwarding mode anycast-gateway

interface Vlan2500

  description FabricBD

  no shutdown

  mtu 9216

  vrf member TENANT1

  ip forward

interface Vlan3000

  description VPC Layer-3 Peering for VXLAN

  no shutdown

  ip address 10.3.1.1/30 # must be a unique IP on each VPC member

  ip ospf network point-to-point

  ip router ospf UNDERLAY area 0.0.0.0

  ip pim sparse-mode


nve infra-vlans 3000

# required for Nexus 9300-EX/FX or Nexus 9200

interface port-channel1

  description VPC Peer-Link

  switchport mode trunk

  spanning-tree port type network

  lacp suspend-individual

  vpc peer-link

hardware access-list tcam region vacl 0

# example region to free up space for arp-ether region


hardware access-list tcam region arp-ether 256 double-wide

# required for ARP suppression, requires reboot

# double-wide is required starting 7.0(3)I3(1)

# not required for Nexus 9300-EX/FX or Nexus 9200

interface nve1

  mtu 9216

  no shutdown

  source-interface loopback1

  host-reachability protocol bgp

  member vni 30000

    suppress-arp

    mcast-group 239.239.239.100

  member vni 30001

    suppress-arp

    mcast-group 239.239.239.101

  member vni 30099

    mcast-group 239.239.239.99

  member vni 50000 associate-vrf

interface Ethernet1/1

  switchport mode trunk

  spanning-tree port type edge trunk

  spanning-tree bpduguard enable

interface Ethernet1/47

  description Link for VPC Peer-Link

  switchport mode trunk

  channel-group 1 mode active

interface Ethernet1/48

  description Link for VPC Peer-Link

  switchport mode trunk

  channel-group 1 mode active


interface Ethernet2/1

  description Link to Spine "1"

  no switchport

  mtu 9216

  ip address 10.1.1.1/30

  ip ospf network point-to-point

  ip router ospf UNDERLAY area 0.0.0.0

  ip pim sparse-mode

  no shutdown

interface loopback0 # Loopback for Router ID, routing adjacency and peering

  ip address 10.250.250.102/32

  ip router ospf UNDERLAY area 0.0.0.0

  ip pim sparse-mode

interface loopback1 # Loopback for VTEP only

  ip address 10.254.254.102/32

  ip address 10.254.254.1/32 secondary

  ip ospf network point-to-point # changes the OSPF network type from Loopback to point-to-point; required for VPC

  ip router ospf UNDERLAY area 0.0.0.0

  ip pim sparse-mode

router ospf UNDERLAY

  router-id 10.250.250.102

  log-adjacency-changes detail

router bgp 65500

  router-id 10.250.250.102

  address-family ipv4 unicast

  neighbor 10.250.250.101 remote-as 65500

    update-source loopback0

    address-family ipv4 unicast # optional for "show ip bgp summary" support

    address-family l2vpn evpn

      send-community both

  vrf TENANT1

    address-family ipv4 unicast

      advertise l2vpn evpn

      redistribute direct route-map FABRIC-RMAP-REDIST-SUBNET

      maximum-paths ibgp 2

evpn

  vni 30000 l2

    rd auto

    route-target import auto

    route-target export auto

  vni 30001 l2

    rd auto

    route-target import auto

    route-target export auto

  vni 30099 l2

    rd auto

    route-target import auto

    route-target export auto
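With the leaf configured, the VTEP, VPC, and EVPN state can be verified with commands such as:

show nve peers
show nve vni
show vpc
show bgp l2vpn evpn summary
show l2route evpn mac all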

Leaf "V2" Configuration:

hostname LeafV2

nv overlay evpn

feature ospf

feature bgp

feature pim

feature interface-vlan

feature vn-segment-vlan-based

feature nv overlay

feature vpc

fabric forwarding anycast-gateway-mac 2020.DEAD.BEEF

ip pim rp-address 10.254.254.254 group-list 239.239.239.0/24

vlan 1,99-101,2500,3000

vlan 99

  name L2onlyHostSegment

  vn-segment 30099

vlan 100

  name L2L3HostSegment

  vn-segment 30000

vlan 101

  name L2L3HostSegment2

  vn-segment 30001

vlan 2500

  name FabricBD

  vn-segment 50000

vlan 3000

  name VPCL3Peering

route-map FABRIC-RMAP-REDIST-SUBNET permit 10

  match tag 21921

vrf context TENANT1

  vni 50000

  rd auto

  address-family ipv4 unicast

    route-target both auto

    route-target both auto evpn

  address-family ipv6 unicast

    route-target both auto

    route-target both auto evpn

vpc domain 1

  peer-switch

  peer-keepalive destination 10.2.8.2 source 10.2.8.1 vrf management

  peer-gateway

  ip arp synchronize

interface Vlan100

  no shutdown

  vrf member TENANT1

  ip address 192.168.100.1/24 tag 21921

  fabric forwarding mode anycast-gateway

interface Vlan101

  no shutdown

  vrf member TENANT1

  ip address 192.168.101.1/24 tag 21921

  fabric forwarding mode anycast-gateway

interface Vlan2500

  description FabricBD

  no shutdown

  mtu 9216

  vrf member TENANT1

  ip forward

interface Vlan3000

  description VPC Layer-3 Peering for VXLAN

  no shutdown

  ip address 10.3.1.2/30 # must be a unique IP on each VPC member

  ip ospf network point-to-point

  ip router ospf UNDERLAY area 0.0.0.0

  ip pim sparse-mode


nve infra-vlans 3000

# required for Nexus 9300-EX/FX or Nexus 9200

interface port-channel1

  description VPC Peer-Link

  switchport mode trunk

  spanning-tree port type network

  lacp suspend-individual

  vpc peer-link


hardware access-list tcam region vacl 0

# example region to free up space for arp-ether region


hardware access-list tcam region arp-ether 256 double-wide

# required for ARP suppression, requires reboot

# double-wide is required starting 7.0(3)I3(1)

# not required for Nexus 9300-EX/FX or Nexus 9200

interface nve1

  mtu 9216

  no shutdown

  source-interface loopback1

  host-reachability protocol bgp

  member vni 30000

    suppress-arp

    mcast-group 239.239.239.100

  member vni 30001

    suppress-arp

    mcast-group 239.239.239.101

  member vni 30099

    mcast-group 239.239.239.99

  member vni 50000 associate-vrf

interface Ethernet1/1

  switchport mode trunk

  spanning-tree port type edge trunk

  spanning-tree bpduguard enable

interface Ethernet1/47

  description Link for VPC Peer-Link

  switchport mode trunk

  channel-group 1 mode active

interface Ethernet1/48

  description Link for VPC Peer-Link

  switchport mode trunk

  channel-group 1 mode active


interface Ethernet2/1

  description Link to Spine "1"

  no switchport

  mtu 9216

  ip address 10.1.1.5/30

  ip ospf network point-to-point

  ip router ospf UNDERLAY area 0.0.0.0

  ip pim sparse-mode

  no shutdown

interface loopback0 # Loopback for Router ID, routing adjacency and peering

  ip address 10.250.250.103/32

  ip router ospf UNDERLAY area 0.0.0.0

  ip pim sparse-mode

interface loopback1 # Loopback for VTEP only

  ip address 10.254.254.103/32

  ip address 10.254.254.1/32 secondary

  ip ospf network point-to-point # changes the OSPF network type from Loopback to point-to-point; required for VPC

  ip router ospf UNDERLAY area 0.0.0.0

  ip pim sparse-mode

router ospf UNDERLAY

  router-id 10.250.250.103

  log-adjacency-changes detail

router bgp 65500

  router-id 10.250.250.103

  address-family ipv4 unicast

  neighbor 10.250.250.101 remote-as 65500

    update-source loopback0

    address-family ipv4 unicast # optional for "show ip bgp summary" support

    address-family l2vpn evpn

      send-community both

  vrf TENANT1

    address-family ipv4 unicast

      advertise l2vpn evpn

      redistribute direct route-map FABRIC-RMAP-REDIST-SUBNET

      maximum-paths ibgp 2

evpn

  vni 30000 l2

    rd auto

    route-target import auto

    route-target export auto

  vni 30001 l2

    rd auto

    route-target import auto

    route-target export auto

  vni 30099 l2

    rd auto

    route-target import auto

    route-target export auto
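Since Leaf "V1" and Leaf "V2" form a VPC pair, it is also worth confirming that the VPC parameters are consistent between the two members, for example:

show vpc
show vpc consistency-parameters global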


56 Comments
a.tounkara
Level 1

Hi Lukas

Thanks for your response, it makes sense, but I have some constraints for attaching access switches; VPC is not always possible.

I have followed the STP + IP Fabric recommendations from Cisco to connect an external device to the IP Fabric:

  • IP Fabric switches (leaves) are configured as root bridge.
  • Root guard is enabled.

In short, I have some doubts about the VPC and IP Fabric treatment:

(Sorry, I have no time to check all my remarks, be indulgent with errors.)

  • MAC learning: how is MAC learning for a VLAN managed once the switches are configured as VTEP+VPC? When a station is connected, will both local leaves in the VPC learn its MAC address and send an advertisement (BGP + MAC-VRF AFI) to the RR, or is a special MAC address used for the VPC cluster? If yes, will we be in ECMP (yes/no)?

  • Split brain in VPC (loss of the peer-link) may result in an unstable situation inside the fabric. Once the port-channel interface used as the peer-link is lost, the backup VPC peer should shut all of its interfaces, but this will take some time, right? Both peers may be seen as VTEP endpoints by the other leaves acting as VTEPs, so this means potential duplication or MAC address flapping issues, no?

  • Orphan port management jointly with the IP Fabric is very opaque to me: once a link becomes single-attached, how will the VPC perform route advertisement inside the IP Fabric if we consider a case where the link is connected to one peer but the active VTEP is the other peer? (For me, the traffic will go through the VPC peer-link, or will the anycast VTEP work in this situation, making the VPC peer that receives the traffic handle it automagically?)

  • Finally, I have doubts about broadcast and multicast traffic management in this situation: for IP unicast handling on a VTEP, the source and destination IPs of the VXLAN-tunneled traffic are simply the actual VTEP loopbacks (let's say loopback0), but for broadcast and multicast traffic this becomes more complex: the two VPC peers are potentially both seen as PIM DR (Y/N) due to the anycast configuration for IP multicast, and they will both send PIM Joins for the multicast group used for VNI BUM traffic. This means they will both receive multicast traffic from a source, both will have a correct RPF check result, and I don't know whether assert messages are exchanged between VPC cluster members, so how is this treated exactly?

  • Otherwise, I have a lot of trouble understanding how some STP optimizations like bridge assurance and loop guard work jointly with VPC and the IP Fabric: bridge assurance ensures a link is err-disabled if no BPDU is received anymore; will that be the case between the two VPC members if a link becomes unidirectional?

Anyway, I'll test and send updates, because this subject is interesting.

Regards

Lukas Krattiger
Cisco Employee

Amadou,

Please feel free to follow the STP root-bridge/root-guard approach; my recommendations follow the best practices where no eventuality can introduce a problem into your existing network. Exactly because of this, the recommendation is a single point of interconnection, and this in a redundant fashion (VPC).

As you have doubts about how VPC works with VXLAN, I would recommend consulting the documentation. We have detailed documents on this, and many Cisco Live presentations also explain that part.

  • MAC learning: how is MAC learning for a VLAN managed once the switches are configured as VTEP+VPC? When a station is connected, will both local leaves in the VPC learn its MAC address and send an advertisement (BGP + MAC-VRF AFI) to the RR, or is a special MAC address used for the VPC cluster? If yes, will we be in ECMP (yes/no)?

VPC in VXLAN uses an anycast VTEP IP address. A host MAC address is learned on both VPC members and announced with the anycast VTEP IP as the next hop. In addition, Site of Origin (SOO) ensures that a MAC learned within a VPC domain is not re-learned via the RR.
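The anycast VTEP IP is the shared secondary address on loopback1 in the configuration above, for example on Leaf "V1":

interface loopback1
  ip address 10.254.254.102/32 # individual VTEP IP
  ip address 10.254.254.1/32 secondary # anycast VTEP IP, identical on both VPC members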

  • Split brain in VPC (loss of the peer-link) may result in an unstable situation inside the fabric. Once the port-channel interface used as the peer-link is lost, the backup VPC peer should shut all of its interfaces, but this will take some time, right? Both peers may be seen as VTEP endpoints by the other leaves acting as VTEPs, so this means potential duplication or MAC address flapping issues, no?

VPC uses a Peer-Link and a Peer Keep-Alive. The unstable situation would only occur with double failures; this is why Cisco still holds on to the VPC Peer-Link and Peer Keep-Alive for redundancy, unlike other MC-LAG solutions. In case of a Peer-Link failure, the operational secondary brings down its VTEP, and because it is an anycast VTEP IP, the remaining communication fails over to the surviving peer.

  • Orphan port management jointly with the IP Fabric is very opaque to me: once a link becomes single-attached, how will the VPC perform route advertisement inside the IP Fabric if we consider a case where the link is connected to one peer but the active VTEP is the other peer? (For me, the traffic will go through the VPC peer-link, or will the anycast VTEP work in this situation, making the VPC peer that receives the traffic handle it automagically?)

The VPC domain in VXLAN advertises an orphan host with the anycast VTEP IP as the next hop.

  • Finally, I have doubts about broadcast and multicast traffic management in this situation: for IP unicast handling on a VTEP, the source and destination IPs of the VXLAN-tunneled traffic are simply the actual VTEP loopbacks (let's say loopback0), but for broadcast and multicast traffic this becomes more complex: the two VPC peers are potentially both seen as PIM DR (Y/N) due to the anycast configuration for IP multicast, and they will both send PIM Joins for the multicast group used for VNI BUM traffic. This means they will both receive multicast traffic from a source, both will have a correct RPF check result, and I don't know whether assert messages are exchanged between VPC cluster members, so how is this treated exactly?

In a VPC domain, only one of the VPC members handles the BUM traffic. This is a classic Designated Forwarder approach, also used in other multi-homing solutions. There will be multiple PIM peerings, but no forwarding of BUM traffic across them, as per the DF election.

  • Otherwise, I have a lot of trouble understanding how some STP optimizations like bridge assurance and loop guard work jointly with VPC and the IP Fabric: bridge assurance ensures a link is err-disabled if no BPDU is received anymore; will that be the case between the two VPC members if a link becomes unidirectional?

As mentioned initially, you are free to use STP root-bridge/root-guard, but my recommendation was to use a single interconnection point and a VPC domain. Instead of using bridge assurance, use an LACP port-channel, and bridge assurance becomes unnecessary; the same would apply to UDLD in the case of no port-channel. Some of the points you made are slightly unclear, since a VPC domain presents a single Bridge ID when you use VPC (peer-switch) and thus looks like a single Layer-2 STP switch from that perspective. Regardless, as the VXLAN tunnel is always forwarding, STP will only take care of the Classic Ethernet links, ensuring that the two (VXLAN + CE) don't complement each other and create a forwarding loop.

Kind Regards

-Lukas

a.tounkara
Level 1

Hi Lukas

Thanks a lot for these detailed answers.

Another question, not related: do you know whether a QoS configuration for incoming traffic arriving on NX-9300 leaves from the customer is mirrored into the IP Fabric VXLAN tunnels? I mean, if I configure marking and bandwidth reservation on a leaf, once a packet is encapsulated in a VXLAN header, does the Nexus 9300 copy the DSCP values into the VXLAN outer header before forwarding inside the IP Fabric?

I tried to configure and/or display QoS information on the nve1 interface, and it seems to be impossible.

Otherwise, traffic passing through the fabric and reaching a leaf arrives on its 40GE interfaces.

It seems necessary to allocate some TCAM space to make these configs work correctly, but this may impact other features, yes/no?

Regards.

Lukas Krattiger
Cisco Employee

Hi Amadou,

We can apply a QoS policy on the ingress access ports to classify the packets into qos-groups. With the same QoS policy, when using "set dscp", we remark the DSCP values on the VXLAN-encapsulated packets in the egress direction and use the respective queuing policy for scheduling purposes.
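A minimal sketch of such a classification policy (the ACL, class-map, and policy-map names here are purely illustrative):

ip access-list CRITICAL-ACL
  permit ip 192.168.100.0/24 any
class-map type qos match-all CRITICAL
  match access-group name CRITICAL-ACL
policy-map type qos CLASSIFY-IN
  class CRITICAL
    set qos-group 1
    set dscp 46
interface Ethernet1/1
  service-policy type qos input CLASSIFY-IN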

Please note, there are some QoS limitations in the spine-to-leaf direction; there, "QoS classification is not supported for VXLAN traffic in the network to access direction on the Layer 3 uplink interface", as mentioned in http://www.cisco.com/c/en/us/td/docs/switches/datacenter/nexus9000/sw/7-x/vxlan/configuration/guide/b_Cisco_Nexus_9000_Series_NX-OS_VXLAN_Configuration_Guide_7x/b_Cisco_Nexus_9000_Series_NX-OS_VXLAN_Configuration_Guide_7x_chapter_0100.html

Kind Regards

-Lukas

a.tounkara
Level 1

Hi

I saw these details and that part is OK.

But actually it seems that QoS queuing/scheduling cannot work once the IP Fabric is used: QoS policies can neither be reflected at the overlay level nor applied to the physical interfaces, and all flows go to the default queue.

In short I tried this:

1. All configs are performed on an NX-9300 used as a leaf acting as gateway in symmetric IRB.

2. ACL definition to select flows based on source and destination IP.

3. Class-map definitions to match these access-groups.

4. A policy-map to perform marking and put each traffic class in a qos-group; for example, critical is marked EF and qos-group 1, best effort DSCP 0 and qos-group 0, etc.

5. This policy is applied in the input direction on the access ports where my local hosts are attached.

6. Marking works well; I have checked it.

7. With this policy applied on ingress, I created a second policy matching the default queuing class-maps (c-out-q-default, c-out-q1, c-out-q2, c-out-q3).

8. This policy (queuing type) is used to reserve bandwidth for each traffic class.

9. It is applied on the 40GE uplinks (not on the nve1 interface, as that is impossible).

10. The result: all traffic goes into queue 0.

11. My thought is: how is a qos-group matched to a DSCP and then to a queue? The Cisco docs say it is done by default (qos-group 0 in c-out-q-default, qos-group 1 in queue 1, etc.), but it seems it does not work.

Regards


Lukas Krattiger
Cisco Employee

Hi Amadou,

I would suggest you check with TAC, as the mapping of access-port-classified traffic (ingress) on the ingress VTEP is supported. Please make sure you use DSCP, because if there is no trunk on egress, CoS values disappear fast.
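For the egress queuing side, a sketch along these lines (the policy name and percentages are illustrative) maps the system queues to bandwidth reservations:

policy-map type queuing QUEUING-OUT
  class type queuing c-out-q1
    bandwidth remaining percent 30
  class type queuing c-out-q-default
    bandwidth remaining percent 50
system qos
  service-policy type queuing output QUEUING-OUT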

Thanks again for all your interest and Kind Regards

-Lukas

s24sean11
Level 1

Thank you, this is a great write-up.

Can you point to any documentation on a Nexus 9300 and Nexus 1000v VXLAN topology using MP-BGP EVPN? From what I have seen in the 1000v guides, the BGP control plane seems to be specifically called out as only for talking to other VSMs.

Andras Dosztal
Level 3

Hi Lukas,

Have you ever tried to set up the same using NX-OSv 9000? The guide says "PIM or Mcast Group support" is not available, but I don't know whether that applies to the control plane too or just to actual packet forwarding.

Thanks,

Andras

Lukas Krattiger
Cisco Employee

In VXLAN BGP EVPN, the control-plane is always BGP EVPN.

Ingress Replication is assisted by the control plane but still uses the data plane for packet replication. Multicast is data plane only, and that doesn't require assistance from the control plane.

So I'm not sure what exactly your question asks, but NX-OSv 9000 would use ingress replication only.
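For reference, switching a VNI from a multicast group to BGP-based ingress replication is a per-VNI change under the NVE interface (a sketch based on the leaf configuration above):

interface nve1
  host-reachability protocol bgp
  member vni 30000
    suppress-arp
    ingress-replication protocol bgp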

Thanks

-Lukas

m1xed0s
Spotlight

Thanks for the configuration example.

I have a question regarding multi-tenancy. Assume I have two spines (Nexus 9504) with two leaves (Nexus 9300-EX). The two leaves are running VPC and connect southbound to a UCS FI pair.

Can I do multiple tenants with duplicated VLANs? Say VLAN 10/192.168.10.0/24/VNI 30001 is used by tenant A, and VLAN 10/192.168.10.0/24/VNI 30002 is used by tenant B. Workloads for both tenants are running in the same UCS 5108 chassis.

I assume this won't work. I can use the same subnet 192.168.10.0/24 for the two tenants' workloads in the same UCS chassis but need unique VLAN IDs, right?

Lukas Krattiger
Cisco Employee

Shuai,

In UCS, VLAN significance is global, which means that a VLAN ID is a unique identifier. UCS has no awareness of IP subnets or VRFs (tenants). You have to ensure that VLAN IDs are non-overlapping within the UCS domain so they can be clearly mapped on the external switch side.

If you can abstract the VLAN ID within the UCS domain in a way that your two tenants are represented with different VLAN IDs, then you can make it happen.

Thanks

-Lukas

m1xed0s
Spotlight

Thanks for the reply. That's what I thought. So with a UCS system doing multi-tenancy, I will again be limited to 4096 VLAN IDs across tenants, even if VXLAN is configured northbound.

Also, on a single leaf switch or a leaf VPC pair, I cannot use VLAN 10 for tenants A and B at the same time, right? I can only duplicate the corresponding SVI addresses and separate them with tenant VRFs on a single leaf switch or leaf VPC pair, OR use two different leaf switches, with VLAN 10 for tenant A mapped to a unique VNI and VLAN 10 for tenant B mapped to a unique VNI, and configure the VRFs.

Is there a best practice or design guide regarding Layer-2 multi-tenancy in the datacenter?

Lukas Krattiger
Cisco Employee
m1xed0s
Spotlight

Perfect!! This is exactly what I am looking for! Thanks!

ulasinski
Level 1

Hi Lukas,

I'm not advanced.

In the leaf configuration, you have two loopback interfaces defined. You use loopback0 as the source of OSPF and BGP, and loopback1 as the source of the VTEP. Why do you separate them?

If I wanted to create a completely separate VRF, without leaks, with two associated SVIs (anycast-gateway mode) belonging to it, how would I do it?

1. Create another VRF (VRF 2).

2. Create three VLANs (the two SVIs plus one for transit).

3. Create a second NVE interface (nve2) (should I use the same loopback interface as for nve1?).

4. Advertise the VRF via BGP.

Do I think correctly?
