Slightly modified 2 NIC Configuration - supported?

ryan.lambert
Level 1

Hi all,

Very new to the Nexus 1k, and actually still digging through a lot of documentation and finding things out. Figured this was a good place to drop this... though pardon me if it seems extremely basic or way off the mark.

We are going to run our vSphere deployment with the N1KV, with our hosts having redundant 10Gb uplinks to a pair of Nexus 5020 switches. I will create vPC (MAC pinning, not HM) within the 1KV, and the Nexus 5k switches will be set up to trunk identical VLANs on each physical host uplink. Essentially, all the port-channeling is done within the software switch. I want to achieve an active/active setup with this, and I believe I can in this configuration (question #1)...
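
On the 5020 side, what I have in mind for each host-facing port is just a plain trunk, roughly like the sketch below (interface number and VLAN list are placeholders, not our final values):

interface Ethernet1/10
  description ESX host 10Gb uplink to N1KV VEM
  switchport mode trunk
  switchport trunk allowed vlan 91-96
  ! no port-channel config on the 5020 side - the channeling is done by mac-pinning in the 1KV
  spanning-tree port type edge trunk
  no shutdown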

Second question: our server team would like to keep the Service Console NICs physically separate from the 10Gb NICs. The 10Gb NICs will carry VSM Mgmt, Control, Packet, vMotion, iSCSI, and guest traffic.

These separate 1Gb Service Console NICs will be hung off our 2148T FEX modules. These NICs will carry the IP I'm supposing I would need to bind the VSM Mgmt to for integration into the vSphere environment. Same VLAN.

Now, I know when I am setting up the VSM I need to bind it to the VSphere server, and this is where I am getting a bit hung up.

A) I'm not sure if this would even be a supported configuration.

B) I am not too keen on this, since I feel that even if this were supported, we'd be introducing another layer of potential failure with a separate physical NIC, cable, and port to account for. Obviously this is only my personal opinion and doesn't necessarily mean it is correct... which is why I'm here.

Is it recommended instead for the server administrators to create the Service Console connectivity on the same 10Gb links I am carrying VSM traffic on?

Thanks in advance for your input.

-Ryan

11 Replies

lwatta
Cisco Employee

Ryan,

vPC-MAC pinning will give you active/active with failover.

In regards to your network topology, I don't see anything wrong with it. If you want to break out some of the traffic to the 1Gb ports, that's fine. I don't think you are introducing another layer of potential failure so much as another layer of management. We have customers doing both; it really depends on comfort level with the N1KV and VMware. We would suggest running everything on the 10Gb uplinks and taking advantage of vPC-MAC pinning for load balancing and failover.
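
As a rough sketch, once mac-pinning is configured on the uplink port-profile the VSM builds the port-channels for you, and you can sanity-check the result from the VSM CLI with something like:

! verify the auto-created mac-pinning port-channels and their member vmnics
show port-channel summary
! confirm each ESX host/VEM shows up as a module on the VSM
show module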

Is the plan to have the 1Gb ports connected to a vSwitch or the N1KV? Either config is OK.

louis

Hi Louis,

Thanks for your reply. The original plan was to manage the 1Gb Service Console NICs via a vSwitch, with untagged ports on our 2148s in the same VLAN we'd have the VSM Mgmt on.

Per the suggested method, I believe what we are going to do now is put the Service Console on the 10Gb links and remove the 1Gb extras.

I am, of course, slightly more confused about how to get the ball rolling on the 1KV install if we do this. I went through the videos, specifically https://communities.cisco.com/videos/2532, and it seems we're going to need to create some port groups on a vSwitch to facilitate N1KV communication anyhow... I'm looking at a vswif right at the start.

Am I correct in this?

Thanks again,

Ryan

Ryan,

Yes, you are going to have to start with a vSwitch configuration. Install ESX and set up all your network connections as you normally would, then install the VSM on the vSwitch.

What I like to do is add just one uplink when I install the VEM, leaving vSwitch0 up and running with the SC and VMK interfaces. Once I have all the VEMs installed and showing up as connected on the VSM, I go back and migrate the second NIC, along with the SC and VMK interfaces, onto the VEM.

The key for migrating the SC and VMK interfaces is to make sure that the port-profiles for them carry the system vlan xxx directive. It is very important to have system vlan directives on the SC, VMK, Control, and Packet vethernet port-profiles, and also on the uplink port-profile through which those port-profiles reach the physical network.
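
Once the profiles are built, a quick way to double-check the system vlan directives from the VSM is something like the following (the profile names are just placeholders):

! the system vlan line should appear on both the vethernet profile and the uplink profile
show port-profile name SC
show port-profile name system-uplink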

If you still have questions or concerns let us know.

louis

Louis,

Thanks a lot. That helps.

Just to make sure I am really grasping this, let me run this by you...

As mentioned, we have two 10Gb NICs, each of which is going to carry SC/VSM Mgmt (VLAN 91), vMotion (VLAN 92), iSCSI (VLAN 93), VM-Data/Guests (VLAN 94), Control (VLAN 95), and Packet (VLAN 96) networks in an active/active fashion.

My assumption is that my system uplinks need to carry VLANs 91-96, similar to the individual trunk ports on my 5020s (no vPC on the 5ks), with system directives for SC, VMK, iSCSI, Control, and Packet. Something like this in the end product:

port-profile system-uplink
  switchport mode trunk
  switchport trunk allowed vlan 91-96
  no shutdown
  channel-group auto mode on mac-pinning
  system vlan 91,92,93,95,96
  capability uplink
  vmware port-group
  state enabled

Then, I can create a simple access/untagged port-profile to be applied to individual guest machines, of which for now I only really need VLAN 94.
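
Something along these lines is what I'm picturing for the guest profile (name and VLAN are just my placeholders):

port-profile VM-Data
  switchport mode access
  ! guest data VLAN only - no system vlan directive needed here
  switchport access vlan 94
  no shutdown
  vmware port-group
  state enabled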

Thanks again, I appreciate it!

-Ryan

Ryan,

That looks good. Just remember that if you want to connect your SC, VMK, iSCSI, Control, and Packet interfaces to the N1KV, the port-profiles should look like the following example.

port-profile type vethernet SC
  switchport mode access
  switchport access vlan xxx
  no shut
  vmware port-group
  system vlan xxx
  state enabled

Note the system vlan xxx on the vethernet port-profile.

louis

Ah ha... this is the part that is going to confuse me, I have a feeling.

If I create the system uplinks, I think the server admin needs to apply them to the correct vmnic as part of the VEM install (https://communities.cisco.com/videos/2629). So, it sounds like even though the SC/vMotion/etc. VLANs are trunked via the system uplink profile I created in the 1KV, I still need to create vethernet port-profiles for them.

From the point of view of a guy who has done mostly physical switching, plus some handing off of trunks to vSwitch-enabled ESX hosts, the concept isn't quite clicking.

I guess it's more of an ESX disconnect for me: I create the vethernet port-profiles separate from my system uplinks, but how does the server administrator put them to use? Is there documentation on that in the Nexus libraries that I missed?

Ryan

Correct. You create the uplinks and the vethernet ports the VMs connect to. This gives you visibility into and control of the VMs in the network. The uplink allows you to control the connection of the ESX host to the network, while the port-profiles for the vethernet connections give you control over the individual Virtual Machines.
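
Once a vethernet port-profile has the vmware port-group and state enabled lines, it gets pushed to vCenter as a port group on the N1KV DVS, and the server admin simply selects that port group on a VM's network adapter (or on the SC/VMK interface), the same way they would pick a port group on a vSwitch. From the VSM you can then check what landed where with something like the following (the profile name is just a placeholder):

! list veth interfaces along with the owning VM and adapter
show interface virtual
! show the interfaces a given port-profile has been applied to
show port-profile name VM-Data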

Let me know if you want to chat on the phone and/or run through a webex of a simple install.

louis

Hi,

I am in the same situation: I have 4 x 10Gig and 2 x 1Gig NICs on my ESX servers. I want to use these for the network traffic below on a Cisco Nexus 1010.

My thought would be...

2 x 1gig -------> Service Console + vMotion

2 x 10 Gig ------> NFS (Ip storage)

2 x 10Gig --------> VMs

Please advise how we can tie the 2 x 1Gig NICs to the Service Console and vMotion VLANs.

Thanks in advance

Chandra

Chandra,

Your desired setup is fine.  See my response to your other post here: https://communities.cisco.com/thread/21498?tstart=0

If you want to chop your uplinks into three groups rather than two, then just create one more (in addition to the ones in the other post) and move the necessary VLANs to that Uplink Port Profile only.

You can divide your uplink traffic into any configuration you want, with the main limitation being that each VLAN can only be allowed on one Uplink Port Profile.
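
As a rough sketch (names and VLAN numbers are placeholders, not your actual values), splitting the traffic across two uplink port profiles would look something like this:

port-profile uplink-1gig
  capability uplink
  switchport mode trunk
  ! SC and vMotion VLANs live only on the 1Gig uplinks
  switchport trunk allowed vlan 10,20
  system vlan 10
  channel-group auto mode on mac-pinning
  no shutdown
  vmware port-group
  state enabled

port-profile uplink-10gig
  capability uplink
  switchport mode trunk
  ! NFS and VM data VLANs live only on the 10Gig uplinks
  switchport trunk allowed vlan 30,40
  system vlan 30
  channel-group auto mode on mac-pinning
  no shutdown
  vmware port-group
  state enabled

The main thing to notice is that no VLAN appears in both allowed lists.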

regards,

Robert

Hi Robert,

Thank you very much, this will help me a lot.

I am a bit confused about the limitation that "only one VLAN can be allowed on an Uplink Port Profile".

Port Profile 1 - Service Console, VLAN 10

Port Profile 2 - vMotion, VLAN 20

Can't I create a single Uplink Profile and allow two VLANs, 10 & 20, with two uplink NICs for redundancy?

Regards,

Chandra

Yes, you can. To phrase it better: "each VLAN can only be allowed on one uplink port profile (if multiple uplink port profiles are used)".

So a VLAN can't be allowed on two separate uplink port profiles. You can definitely have multiple NICs for redundancy.

Regards,

Robert
