3287 Views · 18 Helpful · 12 Replies

Vblock 0

ambi
Level 1

I was going through the Vblock 0 architecture and saw a network diagram where the storage (EMC Celerra) is connected to the Nexus 5K instead of connecting directly to the 6100 fabric interconnects.

Is there a specific reason for that, and how does the storage traffic flow to the N5K? I thought we could not have multi-hop FCoE.

Any insights?


Ambi


12 Replies

Manish Tandon
Cisco Employee

Ambi

You cannot connect a storage target directly to the 6100s right now.

The FIs only run in NPV mode, so they need to connect to an NPIV-enabled fabric switch, which the Nexus 5K is.

Thanks

--Manish
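
In practice that pairing only needs NPIV enabled on the upstream switch, since the UCS FIs run NPV (end-host) mode by default. A minimal NX-OS sketch for the Nexus 5K side of such a setup; the interface number is illustrative, not taken from the thread:

```
! Nexus 5000 -- acts as the NPIV core switch for the UCS 6100 NPV uplinks
feature npiv

! F-port facing a 6100 FC uplink (interface number is an example)
interface fc2/1
  switchport mode F
  no shutdown
```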

Thanks Manish.

So that would mean we have an FC-to-FC connection between the 6100s and the N5K, or can it be over 10GE FCoE?

Ambi

If you put a Fibre Channel expansion module in the Nexus 5000 and purchase a storage services license, you can connect the back-end CX4 to the Nexus 5000 FC ports and do your FC zoning there. In this configuration the Nexus 5000 acts like an MDS switch.

The EMC NS120 and 240 are CX4s with a Celerra. The Celerra connects to the CX4 directly via FC, but there are additional FC front-end ports on the CX4 that can be used just like on a standalone CX4.
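
As a sketch of what that zoning could look like on the Nexus 5000 with the storage services license; the VSAN number, zone names, and WWPNs below are all illustrative, not from the thread:

```
! Nexus 5000 acting as the FC fabric switch (MDS-like role)
feature npiv

vsan database
  vsan 100

! F-port facing a CX4 front-end FC port (interface number is an example)
interface fc2/3
  switchport mode F
  no shutdown

! Zone an initiator vHBA (first pwwn) with a CX4 SP port (second pwwn)
zone name esx1-to-cx4 vsan 100
  member pwwn 20:00:00:25:b5:00:00:01
  member pwwn 50:06:01:60:41:e0:1b:2f

zoneset name fabric-a vsan 100
  member esx1-to-cx4

zoneset activate name fabric-a vsan 100
```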

Ambi

Currently UCS doesn't support directly connected FCoE or FC targets.

All storage traffic in UCS goes out the FC uplinks, not the 10-gig ports, even though those are FCoE capable.

That will change in the near future (the next release of the UCS software).

So for now, yes, the link between the UCS and the 5K in the Vblock architecture is FC. The 5K can connect to an FC or FCoE array.

Thanks

--Manish

Are you confirming that the next code release for UCS will allow the 6100 interconnects to use FCoE on uplinks to the Nexus 5000? Is this using FIP? Is this the code expected in December this year, or is it just speculation? We just made our first UCS purchase, and I would really like to set it up without purchasing the FC expansion module for the 6100s.

The next major release of UCS (1.4) will have this functionality, i.e. directly connected FC or FCoE targets.

It will *not* be multi-hop FCoE, i.e. the FCoE target needs to be directly connected, not reached through a Nexus 5000. Multi-hop will come later next year.

If you need to go through a Nexus 5000, you can run FC to the Nexus 5000 and attach the FCoE target there. That works now, but it does require the FC module you are trying to avoid.

I am not commenting on the release date, which is not set in stone (a lot of factors, as you can imagine). December 2010 seems a reasonable assumption though.

There will be caveats depending on the topology, etc.

Please contact me offline (mtandon@cisco.com) or your account team if you require more information; it should be possible to provide it to you.

Thanks

--Manish
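
For the interim topology (FC from the FIs to the Nexus 5000, FCoE target hung off the 5K), the target-facing side is ordinary single-hop FCoE on the 5K. A hedged NX-OS sketch; the VLAN, VSAN, and interface numbers are illustrative:

```
! Nexus 5000 -- single-hop FCoE to a directly attached FCoE target
feature fcoe

vsan database
  vsan 100

! Map an FCoE VLAN to the VSAN
vlan 100
  fcoe vsan 100

! Ethernet port facing the FCoE target
interface Ethernet1/10
  switchport mode trunk
  switchport trunk allowed vlan 100

! Virtual FC interface bound to that Ethernet port
interface vfc10
  bind interface Ethernet1/10
  switchport mode F
  no shutdown

vsan database
  vsan 100 interface vfc10
```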

Manish,

I was not aware of the FCoE piece, so are you saying that I will be able to connect my storage array (target) on the 10GE ports and run FCoE from it to the FIs?

If this is the case, then:

1 - Do I still require the FC module on the FIs? (I guess I do, even if I am not using any of its ports, as it provides the fabric logins?)

2 - If I am running FCoE from the target to the FI, and obviously from the UCS chassis to the FIs (as always), will the FI remove the FC frame from the payload and encapsulate it again to send it to the target? (As you say it will not do multi-hop FCoE, but since the initiator and target are both running FCoE, that means the FI will have to decapsulate and re-encapsulate the FC frame, right?)

Thanks

Nuno Ferreira

Hi Manish,

According to the Cisco Vblock document, the Nexus 5K acts as the SAN switch for Vblock 0, and for Vblock 1 the MDS acts as the SAN switch. Why can't we use either the Nexus 5K or the MDS for both Vblock designs?

Foyez Ahammed

Foyez

There is no mandate to use a particular fabric switch. The suggestion is just a recommendation, as the MDS scales better.

Remember, with an N5K you are limited to 8 x 4G FC ports or 6 x 8G FC ports.

Narayan

Well, technically you are correct.

The thing is, if you want to call your solution a Vblock 0, 1, or 2, you have to have the components as per the EMC architecture, or else you won't be able to call it a Vblock.

I know this because I just delivered one, and to change a few components to optimize performance we had to submit it to EMC so it would get approved by their Vblock BU. That took some weeks.

Cheers

Nuno Ferreira

Foyez


I believe it has already been addressed by others, i.e. the applications and the storage (FC) port density dictate what is used in the "validated" solution.

As per one of the Vblock solutions architects:

"Type 0 is directed at NAS shared storage, thus the Nexus 5000.

Vblock type 1 is FC (boot and shared storage) also; in order for this to expand on the FC side, an MDS is needed."


--Manish

Hi,

We are trying to put UC on Vblock, so there is a requirement from VTG to have an FC SAN. According to Cisco and EMC we can use an FC SAN in Vblock 0 and still call it a Vblock 0. So in this case, can we use only the MDS / Nexus 5K across both Vblock 0 and Vblock 1? That way we would have to test only one device. Or is it mandatory to use the Nexus 5K for Vblock 0 and the MDS for Vblock 1 (for the FC SAN)?

Foyez Ahammed
