06-08-2023 06:46 AM
I had the basic configuration done with a lot of help from this discussion.
https://community.cisco.com/t5/switching/nexus-93180-vpc-and-vmware-recommendations/td-p/4850262
I'm installing a new pair of vPC 93180s and new VMware hosts based on EPYC. Can't wait.
Unfortunately, the current network design is messed up. Layer 3 is done by the Core 1 switch, which is a stack of 3750s with 1 Gbps fiber links.
There are plans to upgrade so I can link directly to the new Core 1 Catalyst 9300 stack over faster links, but I feel that's not optimal.
So I would like to move some of the Layer 3 routing to the Nexus. Not take over from Core 1, just specific local routes.
Attached are the basic layout and latest config. I'm looking for a sample of the config required to enable L3 and do some static routing.
Let's say Core 1 has 10.40.16.1 and 10.40.5.1. I'm thinking of putting L3 for those VLANs at 10.40.16.2 and 10.40.5.2 to keep traffic local on the Nexus. I think it would greatly improve performance.
Thx
06-08-2023 07:09 AM
Hi,
If you change the design and move some of the VLANs to the Nexus, it will be confusing to keep track of what is routed on the Nexus and what is routed on the 3750s. In addition, once you move a VLAN to the Nexus, you have to deploy an L3 link between the Nexus and the 3750s, otherwise the traffic will not route correctly. So, let's say the default gateway for the servers/VMs is the Nexus; once the traffic reaches the Nexus, you would have to create an L3 link (using a transit VLAN) and route from the Nexus to the 3750s. My recommendation is to keep L3 on the 3750s and L2 on the Nexus for now, but add additional 1 Gig links (in a port-channel) until you are ready to retire the 3750s and move the VLAN gateways to the Nexus switches.
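Very roughly, the extra links could look something like this; the port-channel/vPC ID and the interface numbers are placeholders I made up, not taken from your config, so adjust them to whatever is free on your side:
! Rough, untested sketch - Po/vPC ID 20 and interface numbers are placeholders
! Nexus side (repeat on both vPC peers if you bundle the uplinks as a vPC)
interface port-channel20
  description L2 uplink to Core 1 (3750 stack)
  switchport
  switchport mode trunk
  vpc 20
interface Ethernet1/47-48
  description 1G links to 3750 stack
  switchport
  switchport mode trunk
  channel-group 20 mode active
  no shutdown
!
! 3750 side (cross-stack EtherChannel towards the Nexus pair)
interface Port-channel20
 description L2 uplink to Nexus vPC pair
 switchport trunk encapsulation dot1q
 switchport mode trunk
interface range GigabitEthernet1/0/47 , GigabitEthernet2/0/47
 switchport trunk encapsulation dot1q
 switchport mode trunk
 channel-group 20 mode active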
HTH
06-08-2023 10:25 AM
Hey, great response. Really made me think.
If I give VLAN 50 10.40.5.2 and VLAN 51 10.40.16.2 on the Nexus (it's .1 on the Catalyst) and on a VM server I set the gateway to 10.40.5.2 instead of .1, wouldn't that keep traffic local? This would be done only on machines connected directly to the Nexus, and not all VLANs would be routable. Would that create routing loops so some traffic wouldn't be able to get out? No other devices would be configured with .2 gateways. Both the routable and non-routable VLANs would go out on the L2 trunk to the world. Unfortunately, we have lots of legacy IPs and a 24x7 operation, which makes your recommendation, which of course makes sense, extremely hard to implement. Plus push-back from the other admins, as always. I can understand some guys get annoyed with my questions, but I only get to do this every 10 years or so, and then just maintain it.
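Something like this is roughly what I'm picturing on the Nexus, just so we're talking about the same thing (not tested, and I'm glossing over the fact that it's a vPC pair, where the .2 would probably have to be a shared virtual/HSRP address rather than sitting on one switch):
! Rough idea only, not tested
! On a vPC pair, .2 would normally be an HSRP/virtual IP shared by both switches
feature interface-vlan
vlan 50
  name Servers-10.40.5.x
vlan 51
  name Servers-10.40.16.x
interface Vlan50
  description Local gateway for 10.40.5.0/24
  no shutdown
  ip address 10.40.5.2/24
interface Vlan51
  description Local gateway for 10.40.16.0/24
  no shutdown
  ip address 10.40.16.2/24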
06-08-2023 02:11 PM
If I give VLAN 50 10.40.5.2 and VLAN 51 10.40.16.2 on the Nexus (it's .1 on the Catalyst) and on a VM server I set the gateway to 10.40.5.2 instead of .1, wouldn't that keep traffic local?
In this case, since the VM gateway is the Nexus (.2), there is really no need for .1 on the Catalyst, and yes, you would route locally on the Nexus. Now, in order for the same subnet to reach the Catalyst and the world, you would need a transit VLAN connecting the Nexus to the Catalyst, with static routes on the Nexus as well as on the Catalyst; otherwise traffic from VLAN 50 (subnet 10.40.5.0/24) reaches the Nexus and has nowhere to go. So, if you are going to go this route, you need to change the connection between the Nexus and the Catalyst from Layer 2 to Layer 3, which means you need a new VLAN/subnet used as a transit. The other option would be to make the connection between the two devices a routed port with a /30 subnet and use that as the transit. So, two options (a rough sketch of option 1 follows below):
1. A transit VLAN/subnet using an SVI on each switch, e.g., VLAN 100, subnet 10.10.100.0/29
2. A routed port on each device with an IP from a /30 (no VLANs and no SVIs), e.g., 10.10.100.0/30
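If it helps, here is a very rough, untested sketch of option 1. VLAN 100 and the 10.10.100.0/29 addressing are only examples, and I'm showing a single Nexus for simplicity (each vPC peer would need its own transit IP):
! Example values only - adjust VLAN ID and addressing to your environment
! Nexus side
feature interface-vlan
vlan 100
  name Transit-to-Core1
interface Vlan100
  description Transit to Catalyst Core 1
  no shutdown
  ip address 10.10.100.2/29
! everything the Nexus does not route locally goes to Core 1
ip route 0.0.0.0/0 10.10.100.1
!
! Catalyst (Core 1) side
vlan 100
 name Transit-to-Nexus
interface Vlan100
 description Transit to Nexus
 ip address 10.10.100.1 255.255.255.248
! point the server subnets that now live on the Nexus back at it
ip route 10.40.5.0 255.255.255.0 10.10.100.2
ip route 10.40.16.0 255.255.255.0 10.10.100.2
!
! and make sure VLAN 100 is allowed on the trunk between the two switches
With option 2 you would skip the SVIs and put the /30 directly on a routed interface ("no switchport") on each side; the static routes stay the same, just pointing at the /30 addresses.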
HTH
06-09-2023 05:05 AM
Great. Two things. One, this is way beyond my capability. Again, this cluster is mainly for backend VMware, storage and server communications, and these are the only Nexus switches in the place. No time for failures, testing or complex solutions. Two, there is no way I would have time to switch over, as the place runs almost 24x7x365. I know. Crazy. My best option would be to push everybody to re-address servers into one or two subnets based on function. We do have servers that talk to servers on many subnets. Well, I tried. Thank you for all the great info; it helps me make up my mind. KISS solution it is then. Cheers
06-09-2023 06:31 AM
Absolutely. Changes like these require a lot of planning and testing before deployment. It is particularly difficult in a 24x7 environment where you need to keep services up and running without any interruption. I do think Layer 2 connectivity between the Nexus and the Catalyst is the easiest solution for now. Good luck with your project!
HTH
06-09-2023 06:46 AM
Layer 2 or Layer 3? Hmm.
I think there is a restriction for vMotion in that it needs Layer 2.
For the other post you shared, let me have a cup of tea, think, and summarize what I think about the
difference between an orphan port and a normal access/trunk port:
an orphan port carries a vPC VLAN but connects to only one N9K switch, so there are some limitations to using an orphan port instead of using a non-vPC VLAN with a plain access port. It depends on what you connect to, and whether you want to route or bridge the traffic between the orphan port and the other servers.
Hope this helps you with your design.
Thanks
MHM
06-09-2023 08:14 AM
What I meant by orphan is a port defined the same way on both switches, i.e.:
interface Ethernet1/20
description Datastores-10.48.10.noroute
switchport
switchport access vlan 100
I assume a single-NIC server connected to switch 1 on port 20 will communicate with another single-NIC server on switch 2 on port 20 over the peer-link, not through the trunk to the world (the outside switch).
In my peer-link config I have this native VLAN line; I'm not sure if I need it there:
interface port-channel123
description **** vPC peer port channel ***
switchport
switchport mode trunk
switchport trunk native vlan 999
spanning-tree port type network
vpc peer-link
interface Ethernet1/2
description **** vPC peer link ***
switchport
switchport mode trunk
switchport trunk native vlan 999
channel-group 123 mode active
no shutdown
Since no allowed VLANs are specified, I assume all are allowed.
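If I ever wanted to prune that down, I guess it would just be something like this on the trunk; the VLAN list here is made up for the example, not our real one:
! Example only - list the VLANs actually needed on the trunk
interface port-channel123
  switchport trunk allowed vlan 50-51,100,999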