I am having issues connecting a VIC 1225 to our network. We were advised by our Cisco rep to purchase these, as we required a 10Gb connection to our SAN/network. My network is as follows -
UCS C240 M4 SFF (Card 1) -> C3850 -> VNXe SAN
UCS C240 M4 SFF (Card 2) -> C3850 -> vCenter
We purchased two 1225s and installed them in slots 2 and 5. The 1225 cards show up and I can configure them, but I get no comms through the network. If we put an i350 quad-port 1Gb card in the UCS I have no problem connecting to the network/SAN, but if I replace it with the 1225 then it does not work.
The settings I have tried on the 1225 are -
Trunk/Access, VLAN/All/0, just having 1 card/2 cards.
Is there any way to use these as just standard NICs (we won't be using the virtual side), or am I missing some settings somewhere?
Any help configuring these would be a dream, as I am lost now on how to get them working (most likely a simple configuration issue).
Many thanks in advance.
Please try resetting to factory defaults via Inventory > Cisco VIC Adapters > General tab > "Reset to Defaults".
I've seen a few cases where the cards came from the factory with previous test diag configs.
Also, as the 3850 is not an FCoE-capable switch, make sure "Enable FIP mode" is NOT checked in the 'Modify Adapter Properties' dialog on the adapter(s) General tab.
A few other notes:
The 2 default vNICs are trunk by default, and should allow all VLANs.
If you are setting up ESXi, make sure your management vmk0 port is set to tag the VLAN it should be using (assuming your 3850 ports are set to trunk).
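For example, that combination might look something like this (the interface number and VLAN IDs are placeholders for your environment, not your actual values):

```
! 3850 side - trunk the port facing the VIC, allow the needed VLANs
interface TenGigabitEthernet1/0/1
 switchport mode trunk
 switchport trunk allowed vlan 10,20,30

# ESXi side (ESXi shell) - tag the management port group with its VLAN
esxcli network vswitch standard portgroup set -p "Management Network" -v 10
```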
Thanks for that, I will try these ideas tomorrow.
Ah, so you are connecting some directly to the storage controller ports without a switch in between?
Are these the vic 1225-T models then?
Been trying today and followed your advice, Kirk. I have had some success :-)
I can use one of the cards as a NIC and everything is working as it should. The only problem I have now is the iSCSI card.
I am connecting to the SAN with iSCSI through the C3850. There is no trunk, just access ports to connect to the initiators.
The connections can be viewed in ESXi and everything seems good. I can also ping the SAN initiators through the CIMC, so I am getting a little confused now. Is there a setting to allow iSCSI traffic? I've never had this problem before; it just seems to be this card!
PS: the C3850 ports are all 10Gb copper (Cat6a). It is strange that if I put the old card (4-port 1Gb NIC) in and change the vNICs to the appropriate ports, then everything works great (with no changes to the switch/ESXi etc.).
Greatly appreciate your help Kirk.
I would suggest setting your 3850 ports connecting the iSCSI 1225 port to trunk, and allow the iSCSI vlan on them.
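On the 3850 that would be something along these lines (port number and VLAN ID are examples only):

```
interface TenGigabitEthernet1/0/2
 description ESXi iSCSI vNIC (2nd 1225)
 switchport mode trunk
 switchport trunk allowed vlan 20
```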
Since it doesn't sound like you are doing iSCSI boot, there is no VIC/CIMC config specific to iSCSI that needs to be set.
You just need to create two iSCSI initiator VMK ports in ESXi, tag them with your iSCSI VLAN, associate each one with one of your vmnics (vNIC ports on the 2nd 1225 card), and confirm the LUNs have multiple paths.
Also, don't try to set jumbo frames, until you have confirmed the LUNs are reachable at 1500.
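From the ESXi shell, those steps can be sketched roughly like this (the vmk name, port group name, vmhba name, and addresses are all assumptions for your environment - substitute your own):

```
# create an iSCSI VMK on an existing port group and give it a static IP
esxcli network ip interface add -i vmk1 -p iSCSI-A
esxcli network ip interface ipv4 set -i vmk1 -I 192.168.20.11 -N 255.255.255.0 -t static

# bind the VMK to the software iSCSI adapter (repeat for the second VMK)
esxcli iscsi networkportal add -A vmhba33 -n vmk1

# confirm multiple paths to each LUN
esxcli storage core path list
```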
If you can get vmkping tests to work from ESXi (make sure you specify the iSCSI VMK interface to ping from) but the LUNs are not reachable, then something is odd with the LUN config.
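A quick test from the ESXi shell (the VMK name and target IP are examples):

```
# ping the SAN portal, sourced from the iSCSI VMK
vmkping -I vmk1 192.168.20.20

# only once 1500 works: verify jumbo end to end (8972 + headers = 9000, no fragmentation)
vmkping -I vmk1 -s 8972 -d 192.168.20.20
```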
For the 10Gb copper ports, you might want to hard-code the speed and duplex in the 3850 config, as I think I've seen 10GBASE-T occasionally have speed-negotiation issues (not necessarily on 3850s).
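Something like the following on the relevant ports (exact syntax and supported speed values vary by 3850 model and IOS version, so check yours):

```
interface TenGigabitEthernet1/0/2
 speed 10000
 duplex full
```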
iSCSI is simply a TCP protocol that runs over port 3260, and it is transported the same as any other TCP traffic.
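Because it's plain TCP/3260, you can sanity-check reachability of the target from the ESXi shell with nc (the target IP here is an example):

```
# succeeds silently if the SAN is listening on 3260 via this path
nc -z 192.168.20.20 3260 && echo "3260 reachable"
```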
After trying all the solutions above I still did not manage to get this to work.
I managed to get one of the NICs to work, but data only, which is what I wanted. Unfortunately I could not get the iSCSI data to transmit from the server to the SAN (no boot). I have put the old 4-port 1Gb NIC back in and all is working; the customer is happy with this, but I am not :-(
Cheers for your help Kirk but as mentioned we have gone down another route.