
Nexus 1000v with Appliance Ports and NAS

d.fairbrother
Level 1

I have been looking at designs for the use of Appliance Ports for directly attaching a NetApp NAS device to Cisco UCS.

The basic design seems a little flawed: it requires additional NICs on each host and four connections on each controller to be able to present NFS volumes across both NetApp controllers.

I think an alternative design that incorporates the Nexus 1000v would be more suitable, giving an Active/Active connection across both Fabric Interconnects to both NetApp controllers.

I would then be able to use the two NICs that are already configured in the Nexus 1000v with a port channel and MAC pinning.

Does anyone have any experience in attempting this or seen any documentation relating to the use of UCS Appliance Ports, and the Nexus 1000v?

3 Replies

Robert Burns
Cisco Employee

Where & how are you bringing the 1000v into this design when talking about UCS & NetApp direct connect?  If the NetApp will be primarily serving UCS blades/rack servers, then directly connecting at least one 10G port per controller to each FI is the way to go.  If you have two available ports on each controller, you can team and port channel the two, providing redundancy and increased throughput.  Both NetApp controllers are active, so your storage would be reachable through either fabric.
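
For reference, the appliance port side of that can be sketched from the UCS Manager CLI roughly as below; the slot/port numbers and port-channel ID are only examples and need to match your own cabling:

    scope eth-storage
    scope fabric a
    ! appliance port channel on fabric A for the two 10G links from one controller
    create port-channel 10
    create member-port 2 3
    exit
    create member-port 2 4
    exit
    enable
    commit-buffer

The same would be repeated under "scope fabric b" for the ports cabled to the other controller.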

The 1000v is an ESX host-based L2 switch; I'm not sure how you are thinking it operates.  You can definitely use it with your ESX hosts, but it has no specific role in terms of UCS appliance ports.

Robert

Thanks Robert,

I have two NICs already set up and managed by the Nexus 1000v, which will provide all LAN connectivity for a range of VLANs.

The documentation around configuring a NAS through Appliance Ports refers to creating NICs specifically for the NAS/NFS connectivity in each UCS Service Profile. These would then need to be connected to a local vSwitch on the ESXi host, with a VMkernel port created for the NFS connectivity.

Instead of using the separate NICs, I want to use the two already controlled by the Nexus 1000v.

Also, the Appliance Port configuration doesn't seem to control the flow of traffic from the host to the NAS very well unless I use an Active/Standby configuration on the vSwitch to always push traffic over the A Fabric Interconnect or the B Fabric Interconnect. This causes a problem when trying to load balance across both NetApp controllers, as an NFS volume can only be presented via a single controller at a time.
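
For reference, on ESXi 5.x that Active/Standby behaviour on a standard vSwitch can be set from the shell roughly as below; the vSwitch and vmnic names are only examples:

    # vmnic2 (fabric A vNIC) active, vmnic3 (fabric B vNIC) standby
    esxcli network vswitch standard policy failover set --vswitch-name vSwitch1 --active-uplinks vmnic2 --standby-uplinks vmnic3
    # confirm the resulting teaming policy
    esxcli network vswitch standard policy failover get --vswitch-name vSwitch1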

The Appliance Port configuration doesn't support an LACP port channel across a pair of Fabric Interconnects, as they do not support vPC, but the underlying Nexus 1000v would facilitate a port channel across both sides.

I was wondering whether, if I create a port channel at the NetApp controller end and a port channel at the Nexus 1000v end, communication would flow correctly even though the Fabric Interconnects would be unaware of the port channel.

When I whiteboard the designs, I need at least four ports per controller to facilitate an Active/Active configuration across the whole environment, but I only have two 10GbE ports per controller. So I am trying to decide whether to single-home each controller to a Fabric Interconnect or to run an Active/Standby configuration cross-connected to each FI.

Comments inline:

Regards,

Robert


I have two NICs already set up and managed by the Nexus 1000v, which will provide all LAN connectivity for a range of VLANs.

The documentation around configuring a NAS through Appliance Ports refers to creating NICs specifically for the NAS/NFS connectivity in each UCS Service Profile. These would then need to be connected to a local vSwitch on the ESXi host, with a VMkernel port created for the NFS connectivity.

[Robert] There are two pieces here: host (ESX) connectivity and target connectivity (NetApp NAS).  If you plan on directly attaching the NetApp to the UCS Fabric Interconnects, you need to configure the interfaces as "Appliance Ports".  These interfaces would likely be assigned to the VLAN of your NAS network.  As for the host portion, your ESX hosts can use either dedicated NICs (assuming you have enough) or shared NICs to access the NAS.  The NAS side is independent of the 1000v at this point.
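
Sketching the VLAN piece: an appliance VLAN for the NAS network can be defined from the UCS Manager CLI roughly like this, with the name and ID as examples:

    scope eth-storage
    ! VLAN carried only on the appliance ports for NFS traffic
    create vlan NFS-NAS 100
    commit-buffer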

Instead of using the separate NICs, I want to use the two already controlled by the Nexus 1000v.

[Robert] No problem.  You will need to configure a port profile on your 1000v for the NFS VLAN.  Next you'd create a vmk interface through which each host would access the NAS, and assign the vmk to the 1000v port profile.
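
A minimal sketch of that port profile on the 1000v; the profile name and VLAN ID are examples, and the vmk interface itself is then created on each host and attached to this port group from vCenter:

    port-profile type vethernet NFS-vmk
      vmware port-group
      switchport mode access
      switchport access vlan 100
      ! system vlan so vmk storage traffic forwards even before VSM connectivity is up
      system vlan 100
      no shutdown
      state enabled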

Also, the Appliance Port configuration doesn't seem to control the flow of traffic from the host to the NAS very well unless I use an Active/Standby configuration on the vSwitch to always push traffic over the A Fabric Interconnect or the B Fabric Interconnect. This causes a problem when trying to load balance across both NetApp controllers, as an NFS volume can only be presented via a single controller at a time.

[Robert] This is a design limitation.  The two Fabric Interconnects are NOT clustered.  Therefore you would need a path from fabric A and another from fabric B.  This requires you to manually balance your hosts' NAS access by having some hosts use one path and others use the other path.  For added redundancy I'd suggest teaming your NetApp interfaces across controllers: take port 1 from controller 1 and team it with port 1 on controller 2.  This way both paths are protected from a single fabric or controller failure.  You are correct, though, that a single host will not be able to use both paths simultaneously as Active/Active.
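
One way to do that manual balancing on the 1000v with MAC pinning is to pin the NFS vmk port profiles to a specific sub-group, e.g. sub-group 0 for the fabric A uplink and sub-group 1 for fabric B; the names, VLAN and sub-group IDs below are examples:

    ! hosts that should reach the NAS via fabric A
    port-profile type vethernet NFS-vmk-FabA
      vmware port-group
      switchport mode access
      switchport access vlan 100
      pinning id 0
      no shutdown
      state enabled
    ! hosts that should reach the NAS via fabric B
    port-profile type vethernet NFS-vmk-FabB
      vmware port-group
      switchport mode access
      switchport access vlan 100
      pinning id 1
      no shutdown
      state enabled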

The Appliance Port configuration doesn't support an LACP port channel across a pair of Fabric Interconnects, as they do not support vPC, but the underlying Nexus 1000v would facilitate a port channel across both sides.

[Robert] You can team NetApp ports together using LACP, assuming they are connected to the same FI.  vPC-HM (aka MAC pinning) will give you access to each fabric, but you may want to be careful here.  The NAS traffic could essentially go out the UCS uplinks and back down the opposite fabric if they're all using the same VLAN. To prevent this, you may want to use a separate VLAN on each Fabric Interconnect for NAS access and ensure it doesn't exist on any UCS uplinks (assuming access to the NetApp is not required outside of UCS).  If you find external access to the NAS is required, you might be better served locating the NAS northbound of UCS and allowing your NAS VLAN all the way through.  This way the 1000v can pin traffic to any uplink with the guarantee of reaching the NAS.
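
For the LACP piece on the NetApp side, a Data ONTAP 7-mode style sketch; the interface names, ifgrp name, VLAN and addressing are all examples, and both member ports must be cabled to the same FI:

    # dynamic (LACP) interface group over the controller's two 10G ports, IP-based load balancing
    ifgrp create lacp ifgrp0 -b ip e1a e1b
    # tag the NFS VLAN on the interface group and address it
    vlan create ifgrp0 100
    ifconfig ifgrp0-100 192.168.100.10 netmask 255.255.255.0 up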

I was wondering whether, if I create a port channel at the NetApp controller end and a port channel at the Nexus 1000v end, communication would flow correctly even though the Fabric Interconnects would be unaware of the port channel.

[Robert] No.  A 1000v port channel for UCS only supports vPC-HM (MAC pinning).  This is a host-managed port channel, and you would NOT be able to team it with a port channel on the NetApp side.
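
For completeness, that host-side channel is the 1000v uplink port profile running vPC-HM with MAC pinning, so no LACP is negotiated towards the FIs or the NetApp. A minimal sketch, with the name and VLAN list as examples:

    port-profile type ethernet System-Uplink
      vmware port-group
      switchport mode trunk
      switchport trunk allowed vlan 1-100
      ! vPC-HM with MAC pinning: each UCS vNIC becomes its own sub-group, no LACP
      channel-group auto mode on mac-pinning
      no shutdown
      system vlan 100
      state enabled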

When I whiteboard the designs, I need at least four ports per controller to facilitate an Active/Active configuration across the whole environment, but I only have two 10GbE ports per controller. So I am trying to decide whether to single-home each controller to a Fabric Interconnect or to run an Active/Standby configuration cross-connected to each FI.

[Robert] If you require Active/Active you'll need the NAS northbound, as mentioned above; otherwise there's going to be a fair amount of manual work pointing half your hosts to each path respectively.
