12-01-2013 01:15 PM
Hello,
I have two Nexus 5000s set up with a vPC peer link. I also have a Cisco C240 M3 server with a VIC-1225 card that will be running ESXi 5.1, plus four 2248 fabric extenders. I have been searching for best-practice information on how to set up this equipment. The Nexus gear is already running, so it's more a question of connecting the C240 and the VIC-1225 to the Nexus switches. I guess this is better than connecting to the fabric extenders, in order to minimize hops?
All documentation I have found involves setup/configuration with fabric interconnects, which I don't have and have been told I do not need. Does anyone have any info on this, and can you point me in the right direction to set this up correctly?
More specifically, how should I connect the VIC-1225 card to the Nexus switches? Just create a regular vPC/port-channel to the Nexuses, using LACP set to active?
Do I need to make any configuration changes to the VIC card via the CIMC on the C240 server to make this work?
12-02-2013 01:17 AM
Hello Herenco,
You can use one of the following network connectivity options for a UCS rack server with a FEX:
Figure 7. Cisco Nexus 2000 Series Fabric Extenders Design Scenarios, from Left to Right: Server vPC, FEX vPC, EvPC, vPC+
http://www.cisco.com/en/US/prod/collateral/switches/ps9441/ps10110/data_sheet_c78-507093.html
At a high level, depending on the server's I/O traffic, the traffic flows of the other servers, the FEX hardware model, and the oversubscription ratio, you can connect the server either to a FEX or directly to N5K switchports.
Assuming you are connecting to the N5Ks, make sure you exclude the FCoE VLANs from the vPC peer link.
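As a sketch of what that looks like on NX-OS (the VLAN and port-channel numbers here are examples, not taken from this thread), the FCoE VLAN stays fabric-local and is deliberately left off the peer link's allowed list:

```
! Example only -- substitute your own VLANs and port-channel numbers.
vlan 1001
  fcoe vsan 1001

interface port-channel10
  description vPC peer link
  switchport mode trunk
  ! Allow only the data VLANs; FCoE VLAN 1001 is intentionally excluded
  ! because FCoE traffic must not cross the vPC peer link.
  switchport trunk allowed vlan 20,27,30,50,100
  spanning-tree port type network
  vpc peer-link
```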
N5K configuration guide
VIC adapter configuration
Let me know if you get stuck.
Padma
12-18-2013 07:32 AM
Hello again, I'm stuck.
This is what I have done: I created the vPC between my ESXi host and my two Nexus 5000 switches, but it doesn't seem to come up:
S02# sh port-channel summary
Flags: D - Down P - Up in port-channel (members)
I - Individual H - Hot-standby (LACP only)
s - Suspended r - Module-removed
S - Switched R - Routed
U - Up (port-channel)
M - Not in use. Min-links not met
--------------------------------------------------------------------------------
Group Port- Type Protocol Member Ports
Channel
--------------------------------------------------------------------------------
4 Po4(SD) Eth LACP Eth1/9(D)
######################################################################
vPC info:
S02# sh vpc 4
vPC status
----------------------------------------------------------------------------
id Port Status Consistency Reason Active vlans
------ ----------- ------ ----------- -------------------------- -----------
4 Po4 down* success success -
#######################################################################
vPC config:
interface port-channel4
switchport mode trunk
switchport trunk allowed vlan 20,27,30,50,100,500-501
spanning-tree port type edge trunk
vpc 4
interface Ethernet1/9
switchport mode trunk
switchport trunk allowed vlan 20,27,30,50,100,500-501
spanning-tree port type edge trunk
channel-group 4 mode active
########################################################################
I'm unsure what I must configure on the Cisco C240 M3 (ESXi host) side to make this work. I only have the two default interfaces (eth0 and eth1) on the VIC-1225 installed in the host, and both have the VLAN mode set to TRUNK.
Any ideas on what I am missing?
12-18-2013 09:24 AM
Hello,
Have you configured LACP on ESXi with the two vmnics?
==>> channel-group 4 mode active
If not, can you change it to mode on instead of mode active on the N5K and verify the outcome?
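For reference, that change on the member port would look like the sketch below (interface, VLANs, and channel-group number are taken from the config posted earlier in this thread). Mode on builds a static EtherChannel with no LACP negotiation, which is what an ESXi standard vSwitch expects:

```
! Static EtherChannel (no LACP negotiation) -- for hosts that cannot
! speak LACP, such as an ESXi standard vSwitch.
interface Ethernet1/9
  switchport mode trunk
  switchport trunk allowed vlan 20,27,30,50,100,500-501
  spanning-tree port type edge trunk
  channel-group 4 mode on
```

Note that with mode on, the ESXi side must use IP-hash load balancing on the vSwitch, or traffic will be black-holed intermittently.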
Padma
07-07-2015 11:38 AM
Hi...
I am also facing the same issue. How do I configure LACP on ESXi with the two vmnics?
I have configured channel-group 4 mode active, and tried mode on as well, on the switch side, but the interfaces are not joining the port-channel because teaming is not configured on the server side.
Please advise on how to configure teaming on ESXi.
07-08-2015 11:54 AM
Which ESXi version do you use?
Beginning with VMware vSphere 5.1, VMware supports Link Aggregation Control Protocol (LACP) on vSphere Distributed Switch (VDS) only.
http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=2051826
07-08-2015 04:53 PM
Hi Walter, thanks for the reply. We are using ESXi 5.5, and our server team has configured it with standard vSwitches, not a distributed switch. I referred to some Cisco and VMware documentation and configured EtherChannel mode on, but since then the host connection is not stable and gets disconnected every once in a while. Please advise on the best practice for this type of configuration.
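For what it's worth, with a standard vSwitch and a static (mode on) channel, the load-balancing policy must be IP hash on both the vSwitch and every port group on it. A sketch using esxcli (the vSwitch and port-group names here are assumptions, substitute your own):

```shell
# Sketch only -- "vSwitch0" and "VM Network" are example names.
# Static EtherChannel requires IP-hash load balancing on the ESXi side.
esxcli network vswitch standard policy failover set \
    --vswitch-name=vSwitch0 --load-balancing=iphash

esxcli network vswitch standard portgroup policy failover set \
    --portgroup-name="VM Network" --load-balancing=iphash

# Verify the effective policy:
esxcli network vswitch standard policy failover get --vswitch-name=vSwitch0
```

A common cause of the intermittent disconnects you describe is a port group (often the management port group) that still uses the default "route based on originating port ID" while the vSwitch uses IP hash.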
07-08-2015 09:35 PM
http://blog.ipspace.net/2011/01/vmware-vswitch-does-not-support-lacp.html
07-10-2015 06:16 AM
Thanks, Walter, for the valuable information. Accordingly, I configured EtherChannel mode on from the Nexus switch side, which is in a vPC, but now the host server loses connectivity to vCenter very often. The host is configured with IP hash as recommended. Any advice, please?
I have followed the steps mentioned in the document (Topology 1):
http://www.cisco.com/c/en/us/support/docs/switches/nexus-5000-series-switches/117280-config-fcoe-00.html
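When a channel behind a vPC flaps like this, it is worth checking both N5Ks for configuration mismatches before looking at the host. A sketch of the usual verification commands (standard NX-OS show commands; port-channel 4 is the channel number from earlier in this thread):

```
show vpc consistency-parameters interface port-channel 4
show port-channel summary
show interface port-channel 4
show vpc 4
```

Any Type-1 inconsistency between the two switches, or a member port stuck in suspended (s) state on one side only, would explain intermittent loss of connectivity.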