
UCS uplink to Nexus 2K

leeroi1526
Level 1

Hi to All,

We are in the process of deploying 2 Nexus 7Ks, 2 Nexus 5Ks, and a bunch of Nexus 2Ks in our data center, and we use vPC inside the data center. We also have UCS in our DC, which is currently connected to our 6509E core switch via a port channel of four physical 1G links, giving 4G of aggregated bandwidth. Since we will replace the 6509E with the Nexus platform, the 5Ks will be the aggregation layer with vPC to the 7Ks, while the 2Ks also have vPC to the 5Ks. My plan is to connect each fabric of my UCS (a total of 4 links per fabric) via a Layer 2 port-channel in trunk mode to my Nexus 2K, allowing specific VLANs on the trunk ports; this is the same setup I have with my 6509E switch. My concern is: does the N2K support this type of deployment? I am aware that I can connect the UCS to either my 5Ks or my 7Ks; I just want to know if there are any caveats if I use the 2Ks, and at the same time I would not have to procure additional SFPs if I connect it to my 2Ks. Please advise...

 

Best Regards,

 

20 Replies

Walter Dey
VIP Alumni

Best practice for UCS connectivity is a 10G wire-rate vPC to a Nexus 5k or Nexus 7k! Therefore the N2k is not recommended. Btw, if you need native FC, it means FCoE, and not all N2k models support FCoE!

Hi Walter,

I don't need native FC at this moment, as we have our own SAN infrastructure. Can you tell me the issues I may face if I use my 2K?

 

Regards,

Oversubscription and Hop Count.

Don't forget that in any UCS domain you might have East-West L2 traffic that has to leave and re-enter the UCS domain (this has been discussed in this forum many times), in addition to the East-West traffic that is L2-switched inside the UCS domain on the fabric interconnect.

I remember the early days of UCS, when customers did 1G northbound connections and ended up with a bottleneck.

I know of hundreds of UCS implementations, and not a single one has connected UCS to an N2k (FEX).

Anyway, it's a commercial argument (saving port licenses), I know.

Again: best practice is vPC from the UCS FIs to a N5k or N7k.

Good luck

Walter.
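
In case it helps, here is a minimal sketch of what the FI-facing side of such a vPC uplink might look like on one of the upstream Nexus switches. The interface, VLAN list, and port-channel/vPC IDs below are placeholders for illustration only, and the matching port-channel with the same vpc ID has to exist on the vPC peer switch as well:

! N7K-1, ports facing UCS FI-A (placeholder interfaces, VLANs and IDs)
feature lacp
interface port-channel 101
  description vPC uplink to UCS FI-A
  switchport
  switchport mode trunk
  switchport trunk allowed vlan 10,20,30
  vpc 101
interface Ethernet1/1
  description 10G link to FI-A
  switchport
  switchport mode trunk
  switchport trunk allowed vlan 10,20,30
  channel-group 101 mode active
  no shutdown

On the UCS side, the corresponding uplink port-channel is created per fabric in UCS Manager (under the LAN Cloud for that fabric), and it runs LACP towards the upstream pair.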

We will go ahead and use one 10G SFP in each FI. FI (A) will connect to N7K1 while FI (B) will connect to N7K2, as trunk ports. I assume that achieving vPC from the UCS FIs to the N7Ks would require two 10G SFPs in each FI, which I'm afraid we do not have in stock as of now.

 

Regards,

Jerone

Keep an eye on your traffic levels. I don't know whether it's the fiber or the dollars you are short of, but you may find that the volume of your traffic is nowhere near 10G.

Hi Jerone

This is perfect! Without vPC you just have the situation that the loss of an uplink switch will result in all the vNIC/fNIC UCS interfaces going down.

And btw, the cost of 10G is essentially only the port cost/license; copper twinax transceiver cables are supported for the N7K and UCS at various lengths and are much less expensive than optical SFP+, see http://www.cisco.com/c/en/us/td/docs/interfaces_modules/transceiver_modules/compatibility/matrix/10GE_Tx_Matrix.html

Walter.

I have around 500 VMs in UCS. My east/west (routed) traffic is minimal. My environment is primarily Citrix-centric.

All systems of a common role are placed on the same VLAN, so it's just L2.

We engineered sets of 4 x 10G connections because the 3 sets of FIs we have originally sat 10 m from the core routers. Way, way overkill in terms of bandwidth requirements, but a 10G SR optic costs close enough to a 1G one, so we might as well.

At 7 am the two 10G links from FI_A are at 31 Mbps each, transmit and receive, and the 4-link 32G SAN port channel is at 460 Mbps. Thirty minutes later this has dropped to 54 Mbps and 384 Mbps.

The two 10G links from FI_B are under 30 Mbps total, and the 4-link 32G FC port channel is at 510 Mbps. I could have run these off 2148s connected to 5548s with L3 daughter cards and not bought Nexus 7Ks or VSSes.

What is your current utilization ?

I have multiple 61xx Fabric Interconnects and 6248 Fabric Interconnects connected to 7Ks and VSSes. They could easily run on a handful of 1Gb links instead of the many, many 10Gb links we installed for them. MRJ21 connectors can neatly carry copper connections from rack to rack if you don't want to waste FEXes in every rack when you have low server densities.

 

I might also guess that if you are asking this question... those 7Ks might be overkill too. We are right-sizing our 7Ks to 9Ks and not giving up functionality.

 

 

 

A port-channel of 1G links limits single-stream performance to 1G; today's VMware vMotion or MSFT live migration can easily fill a single 10G pipe.
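
(For anyone wondering why the aggregate does not help a single stream: the port-channel hash pins each flow to exactly one member link, so a single vMotion TCP stream over a 4 x 1G bundle still tops out around 1G, no matter how many links are in the bundle. You can see which hash your Nexus uses with a standard command such as:

show port-channel load-balance

but whatever the hash input, one flow always lands on one member link.)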

Regarding 7k versus 9k: OK, if you are aware of and happy with any feature incompatibilities!

In theory, sure. In _this_ practice, no. That's why I suggested he look at his stats.

They vMotion within the same data center, distribute server workloads and placements between multiple data centers, or have clustered systems that don't move (AIX can't vMotion even if you wanted it to). The FEX-to-FI links are the limiting factor in this environment, not the L3 uplinks. I would not have believed it either until I checked; stats don't lie.

Not sure what feature incompatibility you are talking about. What features used by 50% or more of 7K users can a 9K not do?

Typical rathole discussion with a networking guy.

I agree with your arguments about the N7k versus the N9k!

Your statements about vmotion,... AIX are... weird and useless.

Please concentrate on the original question!


 

Your comment used vMotion as an indicator that bandwidth was required: "today's VMware vMotion or MSFT live migration can easily fill a single 10G pipe."

That bandwidth requirement might be accurate in your environment or your customers'. In mine, or the ones I know of, it is not needed. Hence I told him twice to check his utilization.

Not everybody vMotions outside of a UCS environment; sometimes all they do is move from one blade in one chassis to another blade in another chassis, all connected to the same FIs.

Check your bandwidth to see if you need 2 x 10G connections; you might even have 10G FEXes.

 

Hi rrowlandkumc,

I don't have a 10G FEX on our N2Ks; we only have 1G as of now. If I use my two N7Ks and put a 10G SFP in each N7K as a downlink to my UCS with its two FIs, can I configure vPC given that the fibers are connected to the two different FIs? And from the UCS perspective, can it be configured as a port-channel when one link is connected to fabric A while the other is on fabric B?

Hi

From each UCS fabric interconnect you connect to each N7k, and create a port-channel with UCS Manager;

see Figure 5, "Cisco UCS 6100 fabric interconnect directly connected to Cisco Nexus 7000", in

http://www.cisco.com/c/en/us/products/collateral/switches/nexus-7000-series-switches/white_paper_c11-623265.html

For vPC on the N7k, see

http://www.cisco.com/c/dam/en/us/td/docs/switches/datacenter/sw/design/vpc_design/vpc_best_practices_design_guide.pdf

Walter.
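
As a rough illustration of what the design guide describes, the vPC side on the two N7Ks could look something like this (domain ID, keepalive addresses, VLANs and port-channel numbers are placeholders, not a definitive configuration):

! N7K-1; mirror on N7K-2 with its own keepalive source/destination
feature vpc
feature lacp
vpc domain 10
  peer-keepalive destination 192.168.1.2 source 192.168.1.1 vrf management
interface port-channel 10
  description vPC peer-link to N7K-2
  switchport
  switchport mode trunk
  vpc peer-link
interface port-channel 101
  description to UCS FI-A
  switchport
  switchport mode trunk
  switchport trunk allowed vlan 10,20,30
  vpc 101
interface port-channel 102
  description to UCS FI-B
  switchport
  switchport mode trunk
  switchport trunk allowed vlan 10,20,30
  vpc 102

In UCS Manager you then create one uplink port-channel per FI containing that FI's two 10G links (one to each N7K); a single port-channel never spans fabric A and fabric B.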
