N1kv L3 Mode

james21970
Level 1

All,

I've read through this doc several times, and there's something bothering me. In the section discussing the various methods for deploying in Layer 3 mode, the first scenario is Cisco's recommended solution, but when they get to scenario 2, which uses a different VMK interface, they talk about the possibility of dropping keepalives on the VEM. This doesn't make any sense. I would think that if you used one VMK, you'd be more likely to drop keepalives to the VEM because of heavy vMotions and whatnot.

http://www.cisco.com/en/US/prod/collateral/switches/ps9441/ps9902/guide_c07-556626.html#wp9000401

Also, in scenario 2 they state that you need to use a separate vmnic and will need to use the VMware vSwitch (?!?!)

The whole thing seems to be backward, and this doc is in REV3.

I have a lab running several ESXi hosts and have utilized two VMK interfaces (one dedicated to vMotion traffic and the other dedicated to management for L3 mode) with no issues. But in our production environment we have hundreds of customers, and if there is a chance that the VEMs could lose keepalives due to heavy vMotion, then why would Cisco recommend this solution?

James

8 Replies

lwatta
Cisco Employee

James,

We are in the process of cleaning up a lot of this documentation. I'm guilty of this myself. A lot has changed in the N1Kv code since much of the L3 documentation was written.

For the VEM we recommend using the ESXi mgmt vmk nic if at all possible. There are two reasons for this.

  1. VMware doesn't handle multiple NICs on the same subnet attached to different virtual switches correctly. Traffic will actually flow up only one vSwitch. They describe this problem in a KB article on their site.
  2. If you create another vmk on a different subnet there is a good chance you might need static routes on the ESXi hosts. Nobody wants to create static routes.

There is no requirement that you have to use the ESXi mgmt vmknic, but it makes life easier because of the two issues above.
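
If you do end up on a separate control subnet and really do need a static route, it's at least a one-liner from the ESXi shell. A rough sketch (the addresses are placeholders, and I believe the esxcli route namespace needs ESXi 5.1 or later; older hosts use esxcfg-route instead):

  esxcli network ip route ipv4 add --network 192.168.50.0/24 --gateway 10.10.10.1   # add the static route
  esxcli network ip route ipv4 list                                                 # verify it took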

When it comes to vMotion, I know VMware recommends a dedicated vmknic for it. I think what you have described above is fine. Keep vMotion on a dedicated vmknic and use the mgmt vmknic for VEM control.
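
If your mgmt vmknic lives on the 1000V, the vethernet port-profile for it usually ends up looking something like the sketch below (the name and VLAN 10 are just placeholders for your ESXi management VLAN):

  port-profile type vethernet L3-Control
    capability l3control
    vmware port-group
    switchport mode access
    switchport access vlan 10
    system vlan 10
    no shutdown
    state enabled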

louis

Louis,

First off, thank you for the reply.

I do have a couple of questions (surprise!)

When you say 'VMware doesn't handle multiple NICs on the same subnet attached to different virtual switches correctly,' are you referring to the situation where you use both the 1kv and the VMware vSwitch, assign a different *vmnic* to each switch, and then have a separate vmk ride each vmnic?

Also, I saw a post where someone was asking about a second vmk on the same set of vmnics within the same port profile and same subnet, and you were responding to him (https://communities.cisco.com/docs/DOC-28631)

I was thinking along the lines of having one vmk for management (this would also be used for the N1kv L3 control) and one for vMotion, both within the same subnet, like what you were responding to in the post above, and both riding the same set of vmnics that are part of the same Ethernet uplink port-profile (our other two vmnics are part of a different port-profile and carry our VLAN range for customer VM data traffic).

My problem is this: the last team that set this up created one vmk, and it's being used for both management and vMotion.

The easiest solution would be to create another vmk strictly for vMotion (this would alleviate the need to update DNS for our hosts), but do we keep the new vmk for vMotion in the same subnet, or should the new vmotion vmk be in a different subnet?

Is there any documentation on what truly is best?

I appreciate any insight you have,

James

James,

Take a look at the VMware KB article 2010877. Essentially it says

"The VMkernel TCP/IP stack uses a single routing table to route traffic. If you have multiple VMkernel network interfaces (vmknics) that belong to the same IP subnet, the VMkernel TCP/IP stack picks one of the interfaces for all outgoing traffic on that subnet as dictated by the routing table.

For example, if you have VMkernel ports configured like this:

  • One VMkernel port for vMotion, named vmk0
  • Another VMkernel port for iSCSI, named vmk1

If both of these vmknics are configured to be on the same IP subnet, the VMkernel TCP/IP stack chooses one of the two interfaces for all VMkernel traffic (vMotion and NFS) going out on that subnet.

Configurations with more than one vmknic interface on the same IP subnet should be avoided, unless the vmknics on the same subnet are bound together using iSCSI port-binding or configured for multi-NIC vMotion. "
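
A quick way to see which interface the stack has picked is to look at the vmknics and the VMkernel routing table from the ESXi shell, for example:

  esxcli network ip interface ipv4 get    # list vmknics with their IP addresses and subnets
  esxcfg-route -l                         # show the VMkernel routing table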

My answer to the question on the old post is wrong. I posted that answer before I was aware of the problem above.

Let me ask you a couple of quick questions. We might be able to come up with a better solution if we understand how everything is connected:

  • Do you plan to put network connections on the Nexus 1000V?
  • How many uplink vmnics are you adding to each upstream virtual switch?

My concern is that you are still going to run into an issue where your mgmt traffic and vMotion traffic get combined because of the VMware problem. This could lead to a problem for mgmt traffic while you are doing vMotions.

It would be best if you could put vMotion traffic on a different subnet from your management. This way you know where all your traffic is flowing and you don't have to worry about network congestion.

louis

Louis,

We are currently using the 1kv in all of our deployments (on the UCS blades) and are utilizing four vmnics carved out of our Cisco VIC.

Two of the vmnics are for our system uplink (management) traffic, and the other two are for customer VM traffic.

Based on what you are stating here, it seems the best bet would be to create another vmk and allocate it to a different subnet (and possibly a completely different VLAN as well) specifically for vMotion. If we did this, we could then use the current vmk (the one that is set for both management and vMotion) for ESXi management and N1kv Layer 3 ops.

So in the end, we would have two vmks in different subnets, both riding the same set of vmnics from the host.

Does this sound viable?

James,

That's exactly what I would do. Create a new VLAN and subnet for your vMotion interfaces.
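
For what it's worth, the new vethernet port-profile for the vMotion vmk would be something along these lines (the name and VLAN 200 are just placeholders for whatever you allocate):

  port-profile type vethernet vMotion
    vmware port-group
    switchport mode access
    switchport access vlan 200
    no shutdown
    state enabled

Just remember to add the new VLAN to the allowed list on your system uplink port-profile as well.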

louis

Louis,

Thanks for your time on this.

James

Louis,

L3 control makes perfect sense when it's okay to combine it with management vmk0 and run it through the VEM. Simple subnetting. Simple routing. Simple connectivity troubleshooting.

But what about situations where it's not okay, where vmk0 must remain on a standard vSwitch (scenarios 2 and 3 in the N1Kv deployment guide)? In these cases a new vmk/subnet/VLAN is needed for L3 control and, barring host routes, that VLAN is probably going to be non-routed. This adds complexity to the hypervisor routing table that, in my opinion, carries far more risk than the minor troubleshooting benefit of L3 control. In other words, why not just do L2 control in those situations?

I guess the real question is: is there any functional advantage of non-routed L3 control over L2 control?

-Craig

Craig,

It's a great question. In your case L2 does make sense; however, if you run into a problem, troubleshooting L2 mode can be difficult. With L3 mode you can use simple commands like ping to verify network connectivity. Also, going forward you are going to see some solutions that require L3 (even if it's non-routed L3). For example, VXLAN Gateway requires L3 control mode.
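
For example, from the ESXi shell you can source a ping out of the control vmknic toward the VSM (the interface name and address below are placeholders; I believe the -I option needs ESXi 5.1 or later):

  vmkping -I vmk0 10.10.10.5    # ping the VSM from the host's control vmknic

And on the VSM side, "show svs neighbors" will tell you whether the VEMs are checking in.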

I agree it's not as simple as it could be. I'd really like to see us work with VMware so that we don't require a dedicated vmknic to do L3 control for the VEM. We have the ability to do that today with Hyper-V.

louis
