Mixed Server Architectures

visitor68
Level 5

Hi, folks:

With this question, I would like to engender a discussion that is, hopefully, not vendor-centric but engineering-centric.

What are the virtues of deploying a data center with a uniform server access architecture across the entire server farm? This data center would leverage unified fabrics (FCoE). I'm thinking of the mixed environment that results when deploying rack mount servers in an N5K system (by system, I mean a 5K plus 10G 2232 FEXs, for example) alongside chassis-based blade servers whose I/O would typically be aggregated. In that case, the aggregated I/O of the blade chassis cannot be connected to a host port of a 2232 FEX, so what results is a separate architecture and topology to support blade servers.

In a data center in which abstraction layers have been established across the entire data center to provide stateless computing, application mobility, and virtualized storage, it seems that there is a definable advantage to maintaining a server access architecture that is uniform in terms of access speed, oversubscription, and traffic patterns. It makes for easier troubleshooting and helps create more definable and predictable SLAs, which is a great concern in the age of SaaS and IaaS cloud computing.

Just for clarification: there is no generic answer for the right way to connect servers in a data center. There are a lot of variables to consider. I am not asking for a generic how-to answer about that, nor am I looking to have a discussion focused mainly on products. Instead, I am putting forward some ideas, thoughts, and considerations with regard to uniformity in the server access layer architecture and topology of the data center, and would like to engender a discussion accordingly.

Any thoughts from anyone would be greatly appreciated.

Regards

5 Replies

visitor68
Level 5

C'mon, no takers? This board ain't what it used to be....

Jon Marshall
Hall of Fame

The more uniformity, or to put it another way standardisation, you can introduce, the better. That is a very general statement, but it has a lot of truth to it. You yourself have outlined some of the advantages of this approach. But you can go too far with it and end up trying to fit a square peg into a round hole.

Designing and implementing a DC solution is a fine balancing act. A key design consideration, which incidentally doesn't really apply to Campus designs etc. in the same way, is flexibility. Now you can take flexibility too far and end up with a pig's ear of a design where there is no consistency whatsoever and the result looks "evolved" (and not in a good way) rather than planned. But you can also design yourself into a straitjacket where, because you have adopted the one and only way to do something, you have to make everything fit into that design. And typically in a DC not everything does fit into that one and only way of doing things.

It's an interesting question, and it also depends heavily on whether you are designing a DC from scratch or redesigning what you already have. Blade chassis are very widely used in DC environments, so unless you are designing from scratch it would be unlikely you could discard all that equipment.

I'm not that familiar with Nexus architectures so I may be way off base here, but I don't see this as an entirely separate topology. FCoE aside, both rack mount servers and blade servers will still share a lot of the common architecture within the DC. In your architecture, what role do the N5Ks play and where do they connect back to?

Jon

Looks like Cisco are trying to address the very issue you bring up -

http://www.cisco.com/en/US/prod/collateral/switches/ps9441/ps10110/white_paper_c11-594477.html

Jon

Jon/Joe:

I happen to be writing a position paper for my company about this very issue. Although Ethernet pass-throughs consume more switch ports than a blade switch that provides port consolidation, I think that in heavily virtualized environments with FCoE they are just what the architect ordered. Moreover, if connected to a 2232 FEX, the port cost is pretty minimal: a 40-port 2232 with 32 host ports and 8 fabric ports runs about $12,000. That's about $300 for each 10G FEX port.
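Just to show the arithmetic behind that figure, here is a quick back-of-the-envelope sketch in Python. The ~$12,000 price and the 32/8 host/fabric port split are the numbers I quoted above, so treat it as a rough estimate rather than vendor pricing.

```python
# Rough per-port cost arithmetic for a 2232 FEX, using the approximate
# price quoted above (an assumption, not an official list price).
fex_price = 12_000          # approximate price in USD
host_ports = 32             # 10G host-facing ports
fabric_ports = 8            # 10G uplinks back to the parent 5K
total_ports = host_ports + fabric_ports

cost_per_port = fex_price / total_ports        # spread across all 40 ports
cost_per_host_port = fex_price / host_ports    # per usable host port

print(f"Cost per port (all 40 ports): ${cost_per_port:,.0f}")     # ~$300
print(f"Cost per host port (32):      ${cost_per_host_port:,.0f}")  # ~$375
```

So the ~$300 figure holds if you spread the cost across all 40 ports; per usable host port it works out closer to $375, which is still cheap for 10G access.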

Here are more of my thoughts.

Would love to get feedback! Please feel free to be blunt.

Today's heavily virtualized, service-oriented data centers demand a lot from the network infrastructure. It must be very stable, scalable, resilient, flexible, and intuitive in its forwarding decisions. The applications that provide the services an organization leverages to support its business drivers all rest on the ability of the network to sustain workloads of widely varying characteristics.

The average large enterprise data center can run hundreds of applications, each with its own compute, storage, and network requirements. Many organizations do not have documented baseline resource requirements based on pre-deployment testing for each of these applications. Furthermore, the number of clients that come to depend on an application's services typically increases greatly after the application is placed into production. Adding complexity to all of this are the bandwidth requirements placed on the network infrastructure when the applications run on virtualized servers, oftentimes to the tune of anywhere between 5 and 10 virtual machines per processor core. That equates to up to 80 VMs per dual-socket, quad-core server!
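To make that VM-density arithmetic concrete, here is a small sketch using the 5-10 VMs per core figure above; the server shape is just the dual-socket, quad-core example from the paragraph.

```python
# VM density per server, using the consolidation ratios cited above
# (5-10 VMs per physical core). Server shape is the example from the text.
sockets = 2
cores_per_socket = 4
vms_per_core_low, vms_per_core_high = 5, 10

total_cores = sockets * cores_per_socket
print(f"VMs per server: {total_cores * vms_per_core_low}"
      f" to {total_cores * vms_per_core_high}")          # 40 to 80 VMs

# Even at the low end, a single 10G CNA is shared by 40 VMs, i.e. roughly
# 250 Mbps of edge bandwidth per VM before any oversubscription at all.
print(f"Edge bandwidth per VM (10G link, low end): "
      f"{10_000 / (total_cores * vms_per_core_low):.0f} Mbps")
```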

Adding to the demand on the legacy data network are the requirements imposed when converging the LAN and SAN. Unified fabrics represent a long-term investment that may eventually lead to a large reduction in operational and capital expenditures. But far from simplifying things, FCoE adds significant complexity to deploying and maintaining the converged data network. Data center switching appliances that were running only one protocol stack (TCP/IP) are now running three: TCP/IP, FCoE, and FC. In addition to the complexity, FCoE imposes a demand on edge and inter-switch link bandwidth that typically varies between 20 and 50 percent. This additional operational complexity and demand for bandwidth call for a simplified, yet robust, architecture that mitigates these challenges.
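A quick way to see what that 20-50 percent FCoE share does to the LAN headroom on a converged edge link (the percentages are the ones cited above; the 10G link speed is just an example):

```python
# LAN bandwidth left on a converged 10G edge link after FCoE takes its share.
# The 20-50% FCoE range is the figure cited above; 10G is an assumed link speed.
link_gbps = 10

for fcoe_share in (0.20, 0.35, 0.50):
    lan_left = link_gbps * (1 - fcoe_share)
    print(f"FCoE at {fcoe_share:.0%}: {lan_left:.1f} Gbps left for LAN traffic")
```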

Service-oriented designs are the basis for large enterprise private and public cloud computing networks. The services offered to consumers of cloud services must be highly available, reliable, and secure, and must meet pre-defined Service Level Agreements (SLAs) and general user-experience expectations. Consistency in the deployment of network policies (QoS, security, VLANs, routing, etc.) and in the management and portability of those policies as virtual machines migrate throughout the network is key to sustaining client SLAs. The resiliency offered by automated virtual machine migration and orchestration makes it imperative that the server access architecture be uniform with regard to the network switches' features, functionality, and bandwidth capabilities. Such uniformity also eases the management burden by providing predictable traffic flows and deterministic failover.

As you said before, Joe:


"Just imagine the case in which a VM running a mission critical application on a rack mount server that enjoys a dedicated network access port for its CNA is migrated to a blade server with a 4- or 8-to-1 oversubscription ratio. Now add the bandwidth utilization of the FCoE traffic. How will that impact the performance of the application? How will the client's experience and promised SLA be impacted? One can never really know unless extensive failover testing is done on each and every application, which is typically not the case in any data center with hundreds of applications running."

With the rising cost of maintaining large enterprise data centers, it is highly desirable to consolidate server farms into smaller footprints that require less power and cooling. This requirement can easily be met by deploying blade servers where possible.

Given the above requirements, the demands made on today's data center networks are much greater and more difficult to meet than at any time in the past. Data center server access architectures must be uniform and provide sufficient edge and core bandwidth to support the very wide spectrum of applications and services on which the business depends for its survival and success.

Selecting the Correct Network I/O Blade

Central to the deployment of the blade chassis are the I/O modules that will interconnect the server CNAs with the external switch fabric. While chassis-based FCoE switch solutions, such as the Dell M8428-k, can offer considerable benefits over a top-of-rack FCF (FCoE Forwarder) deployment, they are not the right fit for every organization. The right choice depends on the extent to which an organization would like to unify its network fabric, as well as on previous technology investments that may already have been made.

For those deployments in which a top-of-rack or end-of-row FCF approach is taken, most chassis manufacturers offer a KR-based, CEE-enabled 10G Ethernet Pass-Through Module (PTM). The 10G PTM enables an alternative blade server access architecture that alleviates complexity and yields a more manageable data center while retaining many of the advantages of blade servers and server virtualization. Deploying 10G PTMs results in a 1:1 correlation between a server's 10G CNA port and a 10G top-of-rack switch port. This dedicated-connection-per-server approach provides the necessary bandwidth to support highly virtualized, converged networks by removing the oversubscription and added latency inherent in a switch-based I/O solution. Moreover, it provides parity with the access approach taken with rack mount servers, resulting in a server access architecture that is simplified, scalable, and predictable in terms of traffic patterns and network policy applicability. The KR-based passive midplane is designed to support 40G backplane Ethernet for future growth and a more attractive ROI.
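To illustrate the trade-off between port consumption and oversubscription, here is a rough comparison sketch. The chassis size (16 blades with one dual-port CNA each) and the 8-uplink blade switch are assumptions I made up for illustration, not figures for any particular product.

```python
# Pass-through module (PTM) vs. blade switch: ToR ports consumed and the
# resulting oversubscription. Chassis/uplink counts are illustrative assumptions.
blades = 16
cna_ports_per_blade = 2          # dual-port 10G CNA per blade
blade_switch_uplinks = 8         # 10G uplinks on a hypothetical blade switch

server_facing_ports = blades * cna_ports_per_blade    # 32 x 10G

# PTM: every CNA port maps 1:1 to a ToR (e.g. FEX) port, so no oversubscription.
ptm_tor_ports = server_facing_ports
ptm_oversub = server_facing_ports / ptm_tor_ports

# Blade switch: the ToR only sees the uplinks, at the cost of oversubscription.
bsw_tor_ports = blade_switch_uplinks
bsw_oversub = server_facing_ports / blade_switch_uplinks

print(f"PTM:          {ptm_tor_ports} ToR ports, {ptm_oversub:.0f}:1 oversubscription")
print(f"Blade switch: {bsw_tor_ports} ToR ports, {bsw_oversub:.0f}:1 oversubscription")
```

That is the trade I was getting at above: the PTM burns more top-of-rack ports, but every CNA keeps its dedicated 10G, whereas the blade switch saves ports by accepting oversubscription.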

Thoughts?

Jon? How come you disappear every time I ask you to read something? Am I that boring? lol