Beginner

Hi all, did anyone try NLB on VMs using Hyper-V on UCS blades?

I tried to set up a CAS server cluster using unicast NLB on VMs running on different blades in the UCS. It works for a while, and then it starts dropping packets.

I heard that this unicast scenario is not supported on UCS when the Fabric Interconnect is in End Host Mode. Has anyone tried this before?

2 ACCEPTED SOLUTIONS

Accepted Solutions

Hi all, did anyone try NLB on VMs using Hyper-V on UCS blades?

Hi Dan,

I'm working on the same issue and converted the UCS manager to Switch-Mode and had the same result:

- NLB virtual IP: pinging

- Server 1/ windows on blade 1: pinging

- Server 2/ windows on blade 2: not pinging

There is only one Ethernet uplink to the upstream LAN switch (from Fabric-A; this is a recreation in a lab environment). Both vNICs for the blade servers are connected to Fabric-A.

Load balancing / HA is not working. If server 1 goes down, the whole NLB cluster goes down (no failover to server 2).

Do we need to do any other configuration on the FI or the upstream switches ?

Regards,

Mohammad


Enthusiast

Hi all, did anyone try NLB on VMs using Hyper-V on UCS blades?

Q: If I use multicast mode, is there anything that needs to be done on the FI 6200 or on the upstream LAN switch?

A: A note I found on setting up UCS for multicast NLB:

Microsoft NLB can be deployed in 3 modes:

    Unicast

    Multicast

    Multicast IGMP

For UCS B-Series deployments we have seen that both Multicast and Multicast IGMP modes work.

Multicast IGMP mode appears to be the most reliable deployment mode.

This requires the following settings:

    All Microsoft NLB nodes must be set to "Multicast IGMP".  Important!  Verify this by logging into EACH node independently.  Do not rely on the NLB MMC snap-in.

    An IGMP querier must be present on the NLB VLAN.  If PIM is enabled on the VLAN, that is your querier.  UCS cannot function as the IGMP querier.  If a functioning querier is not present, NLB in IGMP mode will not work.

    You must have a static ARP entry on the upstream switches that points the NLB unicast IP address to the NLB multicast MAC address.  This needs to be configured, of course, on the VLAN of the NLB VIP.  The key is that the routing interface for the NLB VLAN must use this ARP entry, since an ARP reply for a unicast IP cannot contain a multicast MAC address (a violation of RFC 1812).  Hosts on the NLB VLAN must also use the static entry.  You may need multiple ARP entries.  IOS has an ARP "alias" function. (Google it.)
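As a hedged sketch of that last requirement, on a Catalyst-style IOS upstream switch the static entries for the example topology used in this note (VIP 10.1.1.10 on VLAN 10) might look like the following. The interface names and the exact multicast MAC are placeholders; derive the real cluster MAC from your own NLB configuration:

```
! Static ARP: map the NLB VIP to the cluster multicast MAC (example values)
arp 10.1.1.10 0100.5e7f.010a ARPA

! Optionally constrain flooding by pinning the cluster MAC to the L2 ports
! facing the NLB nodes (interface names are hypothetical)
mac address-table static 0100.5e7f.010a vlan 10 interface GigabitEthernet1/0/1 GigabitEthernet1/0/2
```

Exact syntax varies by platform and release; some platforms require additional keywords before accepting a multicast MAC in these commands.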

How Microsoft NLB works (MAC addresses truncated for brevity).

MS NLB TOPOLOGY

    NLB VLAN 10 = IP subnet 10.1.1.0 /24

    NLB VIP = 10.1.1.10

    Static ARP entry on the upstream switch points IP 10.1.1.10 to MAC 01

NLB VIP (MAC 01, IP 10.1.1.10)

NODE-A  (MAC AA, IP:10.1.1.88)    

NODE-B  (MAC BB, IP:10.1.1.99)
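For concreteness (this derivation is not spelled out in the note above, but it follows Microsoft's documented NLB cluster-MAC construction), the untruncated cluster MACs for VIP 10.1.1.10 would be:

```
Multicast mode:      03-BF-0A-01-01-0A   (03-BF + the four VIP octets in hex)
Multicast IGMP mode: 01-00-5E-7F-01-0A   (01-00-5E-7F + the last two VIP octets in hex)
```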

    Using IGMP snooping and the VLAN querier, the snooping table is populated with the NLB MAC address and groups pointing to the correct L2 ports.

    MS NLB nodes will send IGMP membership reports in reply to queries.

    This snooping table can take 30-60 seconds to populate.

    A host on VLAN 200 (10.200.1.35) sends traffic to the NLB VIP (10.1.1.10).

    This is routed, of course, to the VLAN 10 interface, which uses the static ARP entry to resolve the VIP to the 01 MAC address.

    Since the frame has a multicast destination, it is forwarded per the IGMP snooping table.

    The frame will arrive at ALL NLB nodes (NODE-A & NODE-B).

    Each NLB node uses its load-balancing algorithm to determine which node will handle the TCP session.

    Only one NLB node will reply to the host with a TCP SYN/ACK to begin the session.

NOTES

    This will work in a VMware environment with the N1k, standard vSwitch, and vDS. Where IGMP snooping is not enabled, frames destined for the NLB VIP MAC will be flooded.

    NLB can only load-balance TCP/UDP-based services.

    As stated previously, mapping a unicast IP address to a multicast MAC address violates RFC 1812.

TROUBLESHOOTING

    Verify your querier is actually working. Just specifying it does not mean it is working.

    Use Wireshark to verify IGMP queries are being received by the NLB nodes.

    Ensure that the ARP reply is working as expected.  Again, Wireshark is your friend.

    Look at the IGMP snooping tables. Validate that the L2 ports are showing up as expected.

    CSCtx27555 [Bug-Preview for CSCtx27555]: Unknown multicast with a destination outside the 01:xx MAC range is dropped. (6200 FIs; fixed in 2.0(2m).)

    IGMP mode is not affected.

CSCtx27555    Unknown multicast with a destination outside the 01:xx MAC range is dropped.

http://tools.cisco.com/Support/BugToolKit/search/getBugDetails.do?method=fetchBugDetails&bugId=CSCtx27555

Resolved in 2.0(2m).

Workaround: Changing the NLB operating mode from "Multicast" to "Multicast IGMP", which moves the NLB VIP MAC into the 0100.5exx.xxxx range, allows forwarding to occur as expected.
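The troubleshooting steps above can be sketched as show commands on a Catalyst-style IOS upstream switch (a hedged sketch; command availability varies by platform and release):

```
! Is a querier actually elected on the NLB VLAN?
show ip igmp snooping querier vlan 10

! Have the NLB nodes' membership reports populated the snooping table?
show ip igmp snooping groups vlan 10

! Does the VIP resolve via the static ARP entry as expected?
show ip arp 10.1.1.10

! Are the expected L2 ports present for the cluster MAC?
show mac address-table multicast vlan 10
```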

Q: And if I switch to switch mode, does that mean all the profiles and settings on the servers will be dead and I need to recreate them?

A: Cisco Unified Computing System Ethernet Switching Modes

http://www.cisco.com/en/US/solutions/collateral/ns340/ns517/ns224/ns944/whitepaper_c11-701962.html

- There is no impact to the configuration you have done in the service profiles; they will continue to function as expected.  Switch mode makes the FI behave more like a classic switch.  Most noticeably, spanning tree will be enabled, and if you have multiple uplinks from the FI, spanning tree will start blocking redundant paths.

You will need to review your topology and what impact spanning tree will have.  Typically we have the upstream switch port defined as 'edge trunk'; you will want to remove that line.

For preproduction and lab environments, PDI can assist qualified partners with planning, design, and implementation.  Consider reviewing the PDI site and opening a case if you need more detailed assistance.
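For reference, the 'edge trunk' line mentioned above is NX-OS port-type syntax; removing it on the upstream switch port would look roughly like this (the interface name is hypothetical):

```
interface Ethernet1/1
  no spanning-tree port type edge trunk
```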


6 REPLIES
Enthusiast

Re: Hi all, did anyone try NLB on VMs using Hyper-V on UCS blades?

MS unicast NLB is not supported on UCS in End Host Mode (EHM).  The issue is that MS NLB relies on MAC flooding, which is not supported in EHM.  You will need to use multicast mode or configure UCS in switching mode.

Thank You,

Dan Laden

Need assistance with planning, designing, or implementation?

http://www.cisco.com/go/pdihelpdesk

Beginner

Re: Hi all, did anyone try NLB on VMs using Hyper-V on UCS blades?

Thanks, Dan.

I was looking for a different answer. Actually, I had found that on Google previously, but I was looking for anyone who had tried the mentioned scenario, since when I tried it, it worked for 12 hours before it started dropping packets.

If I use multicast mode, is there anything that needs to be done on the FI 6200 or on the upstream LAN switch?

And if I switch to switch mode, does that mean all the profiles and settings on the servers will be dead and I need to recreate them?


Beginner

Same problem: in MS NLB I can

Same problem: with MS NLB I can configure IGMP multicast, and then I can access the NLB URL/IP from other machines on the same VLAN on the UCS, but not from any machines on the same VLAN that aren't on the UCS.  If I switch to unicast, I can't access the NLB from any other server, period.

How do I get to the VLAN or interface configuration menus on the UCS fabrics in order to create static MAC entries for my IGMP MAC?  As a paper CCNA I can VLAN myself out of a wet paper bag, but I can't figure out where to go in the fabric's (6248UP) menu.

ucsm-A#
  acknowledge     Acknowledge
  backup          Backup
  clear           Clear managed objects
  commit-buffer   Commit transaction buffer
  connect         Connect to Another CLI
  decommission    Decommission managed objects
  delete          Delete managed objects
  discard-buffer  Discard transaction buffer
  end             Go to exec mode
  exit            Exit from command interpreter
  recommission    Recommission Server Resources
  remove          Remove
  restore-check   Check if in restore mode
  scope           Changes the current mode
  set             Set property values
  show            Show system information
  terminal        Terminal
  top             Go to the top mode
  ucspe-copy      Copy a file in UCSPE
  up              Go up one mode
  where           Show information about the current mode
 


Beginner

Hi all, did anyone try NLB on VMs using Hyper-V on UCS blades?

Thanks, Dan and Mohammad, for your answers and help.
