Can I connect the Nexus 5596 mgmt port to a Nexus 2248TP-E switch for management purposes?
Management network details:
I have a separate management network consisting of two Nexus 5Ks and 2248s.
Each server rack has one 2248 switch, to which the servers' dedicated OOB ports are connected. The 2248 in each server rack is then uplinked to the Nexus 5Ks. Server administrators will connect to the management Nexus 5K and manage the servers from there.
Data network details:
Each server rack (the same racks as above, in addition to the 2248 mgmt FEX) will have 2232 switches, and the servers' data NICs will be connected to the ToR 2232 switches. The 2232 switches in each rack are uplinked to Nexus 5596 switches, which in turn connect to the N7K core layer.
My requirement: I want to manage these data 5596 switches by connecting their mgmt ports to the 2248 mgmt FEX. The network admin will connect to the management Nexus 5K and log in to the data 5596 switches for configuration and troubleshooting.
Is it possible?
Thanks & Rgds,
I have used 3750s for OOB, but I don't see why not. Basically, the two 5500s and the two 2248s are being used for management. Can't you directly connect the out-of-band mgmt port of all the other switches to the two 5500s instead of going through the 2Ks? That way, there are fewer devices to fail and manage.
Thanks Reza for quick response.
Yes, I can connect the data 5596 switches directly to the mgmt 5596 switches without going via the 2248s. But I have a large number of data 5596 switches. If I connect them directly, I have to lay extra cabling all the way from each server farm to the server farm where the mgmt 5596 switches are placed, and procure extra hardware. The server-farm rooms are a fair distance from each other.
Anyway, since I am already using a 2248 in each server rack for the server OOB ports, I just need to uplink mgmt port 0 of each data 5596 to the 2248 in the adjacent server rack. All the 2248s are already uplinked to the mgmt 5596. This saves me the extra cabling and hardware.
To your knowledge, it is possible to connect mgmt port 0 of a 5596 to a 2248, right?
But I have a doubt: a 2K does not allow another switch to be connected to it. Is mgmt port 0 of the 5596 a routed port?
Yes, you can connect the N5K mgmt interface into the FEX. I have a customer doing this right now.
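For what it's worth, mgmt0 on the N5K is a routed interface that lives in the dedicated management VRF, so from the FEX's point of view it looks like an ordinary host NIC. A minimal sketch, assuming a mgmt VLAN of 100 and arbitrary interface numbers and addresses:

```
! On the management N5K: the FEX host port facing the data 5596's mgmt0
interface Ethernet101/1/10
  description mgmt0 of data-5596-01
  switchport access vlan 100
  spanning-tree port type edge

! On the data 5596: address mgmt0 out of the OOB subnet
interface mgmt0
  vrf member management
  ip address 192.168.100.11/24
```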
My opinion is that the N5K for the management network should not connect its mgmt interface into its own FEX. That defeats the purpose of an OOB network.
Thanks Jerry..thanks a lot...this will help me save face in front of the customer, as I had not provisioned the extra SFPs and cabling..thanks..
How many FabricPath multidestination trees are supported in N7K NX-OS 6.1 and N5K 5.1(3)N2(1)?
Supposing more than three trees are supported and I want to configure root priority manually, do I need to define a root priority on three switches, or can I get by with configuring root priority on two switches and leaving the other switches at the default value of 64?
Also, if I configure IS-IS authentication globally, is it mandatory to configure authentication at the interface level also?
It appears to be 2 as of N7K 6.1.
See Table 3 in this link:
Also, for IS-IS authentication you need the interface config too:
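For reference, a hedged sketch of both pieces; the key-chain name, priority value, and interface numbers are assumptions, so verify the exact syntax against the FabricPath configuration guide for your release:

```
! Key chain used for FabricPath IS-IS authentication
key chain FP-KEYS
  key 1
    key-string myfppassword

! Global: root priority for multidestination tree election
! (higher wins; default 64), plus global IS-IS authentication
fabricpath domain default
  root-priority 255
  authentication-type md5
  authentication key-chain FP-KEYS

! Per FabricPath core port: interface-level authentication
interface Ethernet1/1
  fabricpath isis authentication-type md5
  fabricpath isis authentication key-chain FP-KEYS
```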
Mukund is talking about FabricPath IS-IS authentication.
From the configuration guide, it shows that you can configure it in both global configuration and interface configuration mode. I have not tested this before. I actually have an FP setup in the lab, and I will test this and provide an answer in a couple of days.
Thanks Jerry for your cooperation.
I have one more query: I have a FabricPath domain with two spine switches connecting to another set of 5596 switches which are not in the FP domain. There is no vPC+ between the spine devices and the 5596 devices.
The spine layer is the default gateway for servers in the FP domain and provides routing to the rest of the network via the 5596 switches. I will be configuring HSRP at the spine layer. Can I configure GLBP instead?
Say VLAN 10 is the FP VLAN and VLAN 20 is the VLAN connecting the spine layer and the 5596 switches; VLAN 20 is a CE VLAN. For routing purposes, do I need to provision an extra link between the spine devices carrying only CE VLAN 20, which would also serve as a backup link in case of an uplink failure on the active HSRP spine? I am using static routes for routing.
Suppose the edge devices in an FP domain are connected to each other, apart from connecting to the spine devices.
My spine 7K-1 is defined as the root for tree 1 and spine 7K-2 as the root for tree 2. Host A is connected to edge 5K-1 and host B is connected to edge 5K-4. Both edges are connected to each other.
In this case, will unicast traffic from host A to host B still go via tree 1 and then to 5K-4, eventually reaching host B, in spite of edge 5K-1 and edge 5K-4 being connected to each other directly?
I am extremely sorry for not asking all my doubts together..
1. Is this in your diagram? I am a little confused by your diagram. You can use GLBP or HSRP. By default, HSRP is active-standby; if you want active-active HSRP, you will need vPC+ (until Cisco comes up with anycast HSRP). GLBP will use all nodes automatically with different VMACs; no special configuration is needed. I am assuming your SVIs (VLAN 10 and VLAN 20) are both on the spine N7Ks? If yes, then you don't need to do anything special for GLBP or HSRP; they should be able to route between the two VLANs. As for the cross link, in your case I am thinking you need an L3 cross link; it will give you redundancy in case, say, VLAN 10 is down on Spine-A and VLAN 20 is down on Spine-B.
2. Unicast doesn't use the multidestination trees. It will hash and choose its path accordingly. The multidestination trees are used for unknown unicast, broadcast, and multicast.
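To make item 1 concrete, a minimal HSRP sketch on the spine SVIs (all addresses, VLAN numbers, and priorities are assumptions, not taken from the diagram):

```
! Spine-A: SVIs for the FP VLAN (server gateway) and the CE routing VLAN
interface Vlan10
  ip address 192.168.10.2/24
  hsrp 10
    ip 192.168.10.1        ! VIP the servers use as default gateway
    priority 110
    preempt

interface Vlan20
  ip address 10.10.10.2/24
  hsrp 20
    ip 10.10.10.1
    priority 110
    preempt

! Spine-B mirrors this with .3 addresses and the default priority (100)
```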
Sorry for missing the diagram for query no 1.
Attaching the diagram.
Both SVIs, for FP VLAN 10 and CE VLAN 20, are on the spine N7Ks, and both are running HSRP. VLAN 20 is the VLAN used for routing between the FP domain and the rest of the network. I have added a default route on both N7Ks pointing towards the HSRP VIP on VLAN 20, i.e. 10.10.10.4 on the uplinked L3 switch, and a reverse route for VLAN 10 on the uplink L3 switch pointing towards the HSRP VIP 10.10.10.1. Please refer to the diagram for more clarity.
My query is: do I need a separate link between the spine N7Ks carrying only CE VLAN 20 STP and routed traffic, which would also serve as a backup link in case of an uplink failure on the active HSRP spine? And should this link be a pure routed interface, or will L3 over an SVI do?
Thanks & Rgds,
Okay, this is a different design question. I would never recommend using an L2 VLAN for routing unless it is for appliance traffic (FW, etc.). In your situation, the best way is to use L3 ECMP: each link gets its own /30, and instead of relying on L2 spanning tree (or a /24 broadcast domain), the routing protocol takes care of load balancing and failover for you. I know you mentioned you are using static routes; it can be done that way as well. In any case, I always put an L3 link between the two N7Ks to protect against double-failure scenarios.
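A minimal sketch of that L3 ECMP idea with static routes (interface numbers and addressing are assumptions): two equal-cost static routes over per-link /30s give ECMP, and the N7K-to-N7K routed link covers the double-failure case.

```
! Spine N7K-1: routed /30 uplinks instead of a shared L2 VLAN
interface Ethernet3/1
  no switchport
  ip address 10.10.20.1/30    ! link to upstream 5596-A
interface Ethernet3/2
  no switchport
  ip address 10.10.20.5/30    ! link to upstream 5596-B

! Two equal static default routes -> ECMP across both uplinks
ip route 0.0.0.0/0 10.10.20.2
ip route 0.0.0.0/0 10.10.20.6

! Routed cross link to N7K-2 for the double-failure scenario
interface Ethernet3/5
  no switchport
  ip address 10.10.99.1/30    ! N7K-2 side would be 10.10.99.2/30
```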
Thanks Jerry...thanks a lot....
But is it necessary that the spine devices with L3 capability be the roots for the multidestination trees? In my FP domain design, the N7Ks have F2 modules, whereas the 5596s are purely L2 devices. I want to route between the FP domain and other networks via the N7Ks, and use the 5596 devices as roots for the multidestination trees, since my multicast sources and receivers will be connected to the 5596s in the FP domain.
Thanks & Rgds,
Not necessary; you can put your L3 gateways on the spines or the leafs depending on what you are looking for. If all your FP devices are N7Ks, you can actually dedicate a pair of leafs to do the L3 routing.
If you have N5Ks as your leafs, you might not want to use them for L3 routing because of the supported L3 feature set. Let's say, for whatever reason, you are connecting a pair of load balancers to your FP and you need to use PBR instead of SNAT. In that case, you can't use the N5Ks as your L3 gateways, because they don't support PBR today.
You need to look at the big picture and put everything together and choose your design accordingly.
Thanks for answering all my previous queries.
I have another query regarding multicast in Nexus 5K.
I have two Nexus 5596 switches connected to each other, with multiple VLANs configured on both switches,
say VLANs 10, 20, 30, and 40.
I need to run IGMP snooping, which is enabled by default. The Nexus 5596s are running purely in L2 mode, and I want to configure an IGMP snooping querier.
In NX-OS, the IGMP snooping querier must be configured under VLAN configuration mode, not on an L3 interface (in Cisco IOS Software, an IGMP snooping querier is configured under the Layer 3 interface).
A few queries regarding the same.
1) If an IGMP snooping querier is configured on both switches, only one of them will be active, because an IGMP snooping querier shuts down if a query is seen in the traffic. What does "shutting down" the IGMP snooping querier mean here? Does it disable the L2 VLAN on the other switch? What exactly does it do?
vlan configuration 10
  ip igmp snooping querier 192.168.10.1
vlan configuration 20
  ip igmp snooping querier 192.168.20.1
vlan configuration 30
  ip igmp snooping querier 192.168.30.1
vlan configuration 40
  ip igmp snooping querier 192.168.40.1
I want to enable the IGMP snooping querier on both switches so that, in case one switch fails, the other starts functioning as the querier.
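A hedged sketch of that redundant-querier setup for one VLAN (addresses are assumptions): per standard IGMP querier election, the lowest IP address wins, and the other querier stays silent for as long as it keeps seeing queries on the wire.

```
! Switch-1: lower querier IP -> expected to win the election
vlan configuration 10
  ip igmp snooping querier 192.168.10.1

! Switch-2: higher querier IP -> stands by, takes over if
! queries stop arriving (e.g. Switch-1 fails)
vlan configuration 10
  ip igmp snooping querier 192.168.10.2
```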
2) Do I need to configure an L3 SVI for all the VLANs, with their corresponding IP addresses, as shown below?
int vlan 10
ip address 192.168.10.1/24
Thanks & Rgds,