
Service profile - 2012r2 HV Cluster

goapps.us
Level 1

Hello everyone, 

I wanted to get some feedback on a service profile template we are trying to put into production for our new Hyper-V cluster. Note that we are using SMB storage served by a Windows Server 2012 R2 Scale-Out File Server (SOFS).

| NIC | General Description | RSS | VMQ | RoCE/RDMA | General Note | Fabric / Failover |
|---|---|---|---|---|---|---|
| MGT | Used for host management | No | No | No | Generic Windows adapter policy | FI A + Failover |
| HeartBeat | Cluster heartbeat | No | No | No | Generic Windows adapter policy | FI B + Failover |
| LiveMigration | Live migration network | Yes | No | Yes | Used for VM live migrations | FI A + Failover |
| SMB-1 | Used for SMB traffic of the hosts | Yes | No | No | Custom adapter policy | FI A - no failover |
| SMB-2 | Used for SMB traffic of the hosts | Yes | No | No | Custom adapter policy | FI B - no failover |
| SMB-3 | Used to present shares hosted on the SOFS cluster to VMs; limited purpose | No | Yes | No | Generic Windows adapter policy + VMQ adapter connection policy | FI A - no failover |
| SMB-4 | Used to present shares hosted on the SOFS cluster to VMs; limited purpose | No | Yes | No | Generic Windows adapter policy + VMQ adapter connection policy | FI B - no failover |
| VM-A | Member 1 of tenant vSwitch | No | Yes | No | Generic Windows adapter policy + VMQ adapter connection policy | FI A - no failover |
| VM-B | Member 2 of tenant vSwitch | No | Yes | No | Generic Windows adapter policy + VMQ adapter connection policy | FI B - no failover |
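For reference, here is a rough sketch of how the RSS/VMQ columns above map to the host side (Python wrapping PowerShell purely for illustration; it assumes the in-box NetAdapter cmdlets on the 2012 R2 hosts, and the `ps` helper is made up):

```python
import subprocess

def ps(command: str) -> None:
    """Illustrative helper: run a PowerShell command on the Hyper-V host."""
    subprocess.run(["powershell.exe", "-NoProfile", "-Command", command], check=True)

# Host SMB adapters: RSS on, VMQ off (per the table above).
for nic in ("SMB-1", "SMB-2"):
    ps(f'Enable-NetAdapterRss -Name "{nic}"')
    ps(f'Disable-NetAdapterVmq -Name "{nic}"')

# Adapters that back virtual switches: VMQ on, RSS off.
for nic in ("SMB-3", "SMB-4", "VM-A", "VM-B"):
    ps(f'Enable-NetAdapterVmq -Name "{nic}"')
    ps(f'Disable-NetAdapterRss -Name "{nic}"')
```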

Any thoughts? Should I collapse VM-A and VM-B into a single vNIC and let UCS manage the load balancing and HA?

Thanks 


4 Replies

Walter Dey
VIP Alumni

Generally, from W2012 onward, where Microsoft bundled NIC teaming into the OS, the recommendation is to let the OS, and not the hardware (UCS fabric failover), do HA and LB. I would therefore completely avoid any hardware failover; just create two vEths, one on fabric A and one on fabric B.
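A minimal sketch of that approach on the host side (again Python wrapping PowerShell for illustration; the names TenantTeam and TenantSwitch are made up, and a switch-independent LBFO team is assumed since this is 2012 R2):

```python
import subprocess

def ps(command: str) -> None:
    """Illustrative helper: run a PowerShell command on the Hyper-V host."""
    subprocess.run(["powershell.exe", "-NoProfile", "-Command", command], check=True)

# OS-level team over the two fabric-pinned vEths, so the OS (not UCS
# fabric failover) provides the HA and load balancing.
ps('New-NetLbfoTeam -Name "TenantTeam" -TeamMembers "VM-A","VM-B" '
   '-TeamingMode SwitchIndependent -LoadBalancingAlgorithm HyperVPort -Confirm:$false')

# Tenant vSwitch bound to the team interface.
ps('New-VMSwitch -Name "TenantSwitch" -NetAdapterName "TenantTeam" '
   '-AllowManagementOS $false')
```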

Hello Walter, thank you for your reply. I always appreciate your input. 

I agree with you, but in our tests we are getting some odd results, and I have two vendors pointing fingers at each other. Here are my thoughts on the subject:

Before the packets hit the UCS fabric, they have to travel through the Microsoft vSwitch/network stack and go through its link aggregation and load balancing algorithms. I would argue that the Cisco gear is better equipped to handle these functions than the vSwitch on the Hyper-V hosts. Also, from a support perspective, this approach greatly reduces the chances of finger pointing...

The two drawbacks I see are: 

1) The host OS would never detect a link failure.

2) Assuming two vEths in my vSwitch, each vEth can have a VMQ policy of 128 queues, so a theoretical maximum of 256 VMQs per service profile. If the vSwitch has only one vEth, I limit my VMQ queues to 128 per service profile (spelled out in the sketch below).
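The queue budget from point 2, spelled out (a back-of-the-envelope sketch; the 128-queue figure is the per-vEth maximum assumed from our VMQ connection policy):

```python
# Theoretical VMQ budget per service profile, assuming 128 queues
# per vEth from the VMQ adapter connection policy.
QUEUES_PER_VETH = 128

teamed = 2 * QUEUES_PER_VETH      # two vEths in the vSwitch team
collapsed = 1 * QUEUES_PER_VETH   # one vEth with UCS hardware failover

print(f"teamed: {teamed} queues, collapsed: {collapsed} queues")
# -> teamed: 256 queues, collapsed: 128 queues
```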

Thanks 

Accepted Solution

Hi

There is a generic issue, regardless of whether you use hardware failover or software HA: if you communicate within a VLAN (an L2 network) and the source vEth is on Fabric A while the destination vEth is on Fabric B, the traffic exits the UCS domain to the northbound switch and re-enters it. And even if everything initially sits on Fabric A, a failure could cause part of it to fail over to Fabric B...
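A toy illustration of the point (hypothetical Python, nothing UCS-specific; it just models which path intra-VLAN traffic takes):

```python
# Toy model of the hairpin: intra-VLAN traffic between vEths pinned to
# different fabric interconnects must leave the UCS domain and come back
# through the northbound (upstream) switch.
def l2_path(src_fabric: str, dst_fabric: str) -> str:
    if src_fabric == dst_fabric:
        return f"locally switched on fabric {src_fabric}"
    return (f"fabric {src_fabric} -> northbound switch -> fabric {dst_fabric} "
            "(exits and re-enters the UCS domain)")

print(l2_path("A", "A"))  # stays inside one fabric
print(l2_path("A", "B"))  # hairpins via the upstream switch
```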

Thank you for that nugget. I will keep my service profile template as is. 

Best regards and thank you for your work on these forums. 
