UCS Networking with Windows Servers (bare bones) Connected to Multiple VLANs

cadillacjack
Level 1

Not sure why I am having such a hard time with this... I am setting up my first UCS. I have a couple of W2K8 servers installed on a couple of blades, and each of these servers needs to connect to different networks/VLANs. Normally I just connect each NIC in a regular server to an access port assigned to the VLAN I need that NIC in, assign an IP, and off I go. Worst case, I have a NIC capable of dot1q, I set up a trunk on my switch, configure the NIC for trunking, and we are good to go... But with these blades I am having a hard time.

I have a 5108 with 2 server links connected to FI A and 2 to FI B. From each FI I have a 10Gb fiber uplink into a stacked pair of 2960X switches with the needed VLANs created. The trunk seems to come up, and I have the VLANs configured with SVIs and proper IPs. When I try to ping the hosts I can see the ARP request from the switch and the host replies, but it's like the switch is not getting the ARP reply... The traffic from the switch makes it down to the host and the host tries to reply, but the reply never makes it back. If I mark a certain VLAN as native and change the ID to 1 I can ping, but only on that VLAN.

I have looked and looked at all the walkthroughs, but I feel like I don't understand how to set up the networking on the FIs for a bare-bones server with 3 NICs that each need to access a different VLAN on the upstream switch. I would like VLANs 1 and 2 to go through FI A and fail over to B, and VLAN 3 to use FI B and fail over to A. What the heck am I missing? Do I need to unstack the switches?
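
In case it helps, here is roughly what the uplink trunk config on the 2960X stack looks like (the port numbers below are placeholders, not my exact ports, and VLANs 1/2/3 are the ones I mentioned above):

interface TenGigabitEthernet1/0/49
 description 10Gb uplink to UCS FI-A
 switchport mode trunk
 switchport trunk allowed vlan 1,2,3
 switchport trunk native vlan 1
!
interface TenGigabitEthernet2/0/49
 description 10Gb uplink to UCS FI-B
 switchport mode trunk
 switchport trunk allowed vlan 1,2,3
 switchport trunk native vlan 1

(The 2960X only supports dot1q, so there is no encapsulation command.)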

 

Thanks

6 Replies

Walter Dey
VIP Alumni

Each VLAN should be configured on Fabric A and B; multipathing would then do load balancing and failover.

The only question: is your OS handling a trunk of VLANs, or do you have one vNIC per VLAN?

The servers have vNICs assigned to them. I have tried one VLAN per vNIC and one vNIC with multiple VLANs. The only way I can get traffic to forward consistently is to make one vNIC with a single VLAN, but that VLAN has to be marked as native. As a matter of fact, I have 2 VLANs on the upstream switch with SVIs; when I ping from the switch to the server I can see the ARP request and the server's response, but the switch never gets the ARP reply... it's like the traffic is unidirectional...
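
For what it's worth, this is how I'm watching it from the 2960X side (standard IOS show/debug commands; VLAN 2 is just the VLAN I'm testing with):

show interfaces trunk
show mac address-table vlan 2
show ip arp
debug arp

The trunk shows the VLANs as allowed and active, and debug arp shows the request going out, but the ARP entry for the server never completes.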

 

Let me ask this question:

What is the correct way to set up the FIs with multiple bare-bones Windows servers, each with at least 3 NICs (vNICs)?

Do I create the VLANs globally using the Common/Global setting so the VLANs exist on both fabrics?

Then do I need a vNIC template for each type of NIC (one per VLAN)?

In each vNIC template, do I set a specific FI and check the failover box to get fabric failover?

 

The uplinks appear to be passing some of the trunk traffic; I just don't know where the return traffic is getting stopped.

 

I attached a drawing of the lab.

 

Thanks for the help.

Create e.g. 8 vNICs (vnic0-7): 4 linked to fabric A and 4 to fabric B, e.g. even ones to A, odd ones to B.

vnic0 and vnic1 would e.g. carry VLAN 10, vnic2 and vnic3 VLAN 20, and so on.

I would not set the hardware failover flag, but let the OS multipathing driver do this work.

All of the above is done within the service profile; no configuration on the FIs is necessary (assuming you have created global VLANs, which means the VLANs are identical on fabric A and B).
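
If you prefer the UCSM CLI over the GUI, the steps look roughly like this (syntax is from memory, so double-check it against the UCSM CLI configuration guide; the VLAN names/IDs and the service profile name SP-WIN1 are just examples):

First create the VLANs globally, under eth-uplink rather than per fabric:

UCS-A# scope eth-uplink
UCS-A /eth-uplink # create vlan vlan10 10
UCS-A /eth-uplink/vlan* # commit-buffer

(repeat for vlan20, etc.)

Then add the vNICs in the service profile, pinning each one to a fabric and binding its VLAN:

UCS-A# scope org /
UCS-A /org # scope service-profile SP-WIN1
UCS-A /org/service-profile # create vnic vnic0 fabric a
UCS-A /org/service-profile/vnic* # create eth-if vlan10
UCS-A /org/service-profile/vnic/eth-if* # commit-buffer

Repeat for vnic1 on fabric B with vlan10, vnic2/vnic3 with vlan20, and so on.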

 

Awesome thanks!

 

So when you say multipathing driver, are you referring to the Cisco NIC teaming driver?

 

I really appreciate your help

I did figure out how to use the fabric failover (FF) hardware stuff... It was, as expected, something I was misunderstanding. In the vNIC template for each VLAN I had checked the VLAN's box but never selected the Native radio button. Each vNIC basically has one VLAN bound to it... for some reason I was under the impression that setting the Native VLAN flag in the template would somehow mark that VLAN as native globally, but I was wrong. After going into each vNIC template and marking the VLAN bound to that NIC as Native, everything started to work... That immediately helped me understand end-host mode (EHM) better.
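
For anyone who hits the same thing later, the change boils down to marking the VLAN as native on each vNIC template rather than expecting a global native setting. In CLI terms it is roughly this (the template and VLAN names are just mine, and the syntax should be verified against the UCSM CLI guide):

UCS-A# scope org /
UCS-A /org # scope vnic-templ VNIC-VLAN2
UCS-A /org/vnic-templ # scope eth-if vlan2
UCS-A /org/vnic-templ/eth-if # set default-net yes
UCS-A /org/vnic-templ/eth-if* # commit-buffer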

 

Is your reasoning for not using hardware failover related to performance and packet loss?

 

I noticed it wasn't too bad, but it could be bad enough during heavy traffic in our environment for customers to see issues.

HW failover was mainly used when the OS didn't support multipathing. It also loads the FI CPU with gratuitous ARP (GARP) processing.
