07-12-2011 04:15 AM - edited 03-01-2019 09:59 AM
Hi,
I am going to install three C210 M2 servers, and I am quite new to the UCS world. As per the design, I am supposed to install:
4 x VMs (CUCM, CUC, Presence, UCCX) Primary or Publisher on UCS 01
4 x VMs (CUCM, CUC, Presence, UCCX) Subscriber or Failover on UCS 02
2 x VMs (MeetingPlace) on UCS 03
I have a couple of 3750X access switches for network connectivity. My question is how to configure the network and NIC cards on each UCS server. To my knowledge, every server has 2 x 10 Gig NICs plus 1 x NIC for integrated management. I am confused about how many NICs each VM will have, and how to configure all of these virtual machine NICs on the 2 physical NICs per UCS server. Any suggestions please.
Regards,
Asif
07-12-2011 06:14 PM
Asif,
The Nexus 1000V would be helpful here, but if that is not an option and you are using the standard vSwitch, then both onboard NICs should be used as uplinks in a single vSwitch for redundancy.
Here are some QoS suggestions that might also help:
Redundant physical LAN interfaces are recommended.
One or two pairs of teamed NICs for UC VM traffic. One pair is sufficient on the C210 due to the low load per VM.
One pair of NICs (teamed or dedicated) for VMware-specific traffic (management, vMotion, VMware HA, etc.).
SIDENOTE: A NIC teaming pair can be split across the motherboard and the PCIe card if the UCS C-Series model supports it; this protects against a PCIe card failure.
Once we know exactly which vSwitch you are using, as the previous post asked, we may be able to pinpoint more, but the above is based on the standard vSwitch.
Hope this helps.
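As a minimal sketch of the layout above (assuming ESXi with the standard vSwitch; `vSwitch0`, `vmnic0`/`vmnic1`, the port-group name, and the VLAN ID are assumed defaults and examples, not from the thread), the teamed uplinks could be set up from the ESXi console like this:

```shell
# Add both onboard NICs as uplinks to a single standard vSwitch for redundancy
# (vSwitch0 with vmnic0/vmnic1 are the usual defaults; adjust to your host).
esxcfg-vswitch -L vmnic0 vSwitch0
esxcfg-vswitch -L vmnic1 vSwitch0

# Create a port group for the UC virtual machines and tag its VLAN
# (the name "UC-VMs" and VLAN 100 are hypothetical examples).
esxcfg-vswitch -A "UC-VMs" vSwitch0
esxcfg-vswitch -v 100 -p "UC-VMs" vSwitch0

# List vSwitches, uplinks, and port groups to verify
esxcfg-vswitch -l
```

With both uplinks on one vSwitch, the default teaming policy fails traffic over to the surviving NIC if either physical link goes down.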
07-12-2011 03:35 PM
It depends on your VM environment design.
Are you going to use vCenter, the distributed vSwitch, or the Nexus 1000V?
From there you can set up uplinks and NIC teaming with LACP, and on your physical switch you can set up an EtherChannel and allow the VLANs in use.
Also make sure your QoS and CoS mappings are correct for this UC-on-UCS deployment.
Good luck.
If helpful, rate.
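On the physical side, the EtherChannel and allowed-VLAN setup mentioned above might look like the following on a 3750X. This is a hedged sketch: interface numbers and VLAN IDs are hypothetical, and note that LACP (`mode active`) only works against the distributed vSwitch or Nexus 1000V; the standard vSwitch needs a static bundle (`mode on`) with IP-hash teaming instead.

```
! Hypothetical Catalyst 3750X configuration for the two ports facing one
! ESXi host. Interface and VLAN numbers are examples, not from the thread.
interface range GigabitEthernet1/0/1 - 2
 switchport trunk encapsulation dot1q
 switchport mode trunk
 switchport trunk allowed vlan 10,20
 channel-group 1 mode active   ! LACP; use "mode on" for a standard vSwitch
!
interface Port-channel1
 switchport trunk encapsulation dot1q
 switchport mode trunk
 switchport trunk allowed vlan 10,20
```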
10-10-2011 10:24 AM
The bundle C210M2-VCD2 contains 7 physical NICs in total:
1) 1 x 10/100TX (Management)
2) 2 x 10/100/1000TX onboard
3) 4 x 10/100/1000TX Broadcom on a PCIe card
so no 10 Gbps cards are included in this bundle.
I would dedicate the management port to lights-out management (CIMC), then build 2 VLANs on the vSwitch:
VLAN 10: ESXi console with 2 VMNICs (one of them onboard)
VLAN 20: UC applications with 4 VMNICs (one of them onboard)
Depending on your DC LAN reliability, either build channels for each VLAN to the external LAN to aggregate
bandwidth, or use teaming to connect to separate access switches (or mix both variants).
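One way to realize that split is two vSwitches, one per VLAN, sketched below with ESXi console commands. The vmnic numbering is an assumption (check which vmnic maps to which physical port on your host), and the port-group names are examples.

```shell
# VLAN 10: ESXi console with 2 uplinks (one onboard, one PCIe - assumed numbering)
esxcfg-vswitch -L vmnic0 vSwitch0              # onboard port 1
esxcfg-vswitch -L vmnic2 vSwitch0              # first PCIe port
esxcfg-vswitch -A "ESXi-Console" vSwitch0
esxcfg-vswitch -v 10 -p "ESXi-Console" vSwitch0

# VLAN 20: UC applications with 4 uplinks (one onboard, three PCIe)
esxcfg-vswitch -a vSwitch1                     # create a second vSwitch
esxcfg-vswitch -L vmnic1 vSwitch1              # onboard port 2
esxcfg-vswitch -L vmnic3 vSwitch1
esxcfg-vswitch -L vmnic4 vSwitch1
esxcfg-vswitch -L vmnic5 vSwitch1
esxcfg-vswitch -A "UC-Apps" vSwitch1
esxcfg-vswitch -v 20 -p "UC-Apps" vSwitch1
```

Alternatively, all six uplinks can share one vSwitch with per-port-group NIC teaming overrides, but that is easier to configure in the vSphere Client GUI than from the console.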