With Haseeb Niazi and Chris O'Brien
Welcome to the Cisco Support Community Ask the Expert conversation. This is an opportunity to learn and ask questions about Single-Site and Multisite FlexPod Infrastructure with experts Haseeb Niazi and Chris O'Brien.
This is a continuation of the live webcast.
FlexPod is a predesigned and prevalidated base data center configuration built on Cisco Unified Computing System, Cisco Nexus data center switches, NetApp FAS storage components, and a number of software infrastructure options supporting a range of IT initiatives. FlexPod is the result of deep technology collaboration between Cisco and NetApp, leading to the creation of an integrated, tested, and validated data center platform that has been thoroughly documented in a best practices design guide. In many cases, the availability of Cisco Validated Design guides has reduced the time to deployment of mission-critical applications by 30 percent.
The FlexPod portfolio includes a number of validated design options that can be deployed in a single site to support both physical and virtual workloads or across metro sites for supporting high availability and disaster avoidance. This session covers various design options available to customers and partners, including the latest MetroCluster FlexPod design to support a VMware Metro Storage Cluster (vMSC) configuration.
Haseeb Niazi is a technical marketing engineer in the Data Center Group specializing in security and data center technologies. His areas of expertise also include VPN and security, the Cisco Nexus product line, and FlexPod. Prior to joining the Data Center Group, he worked as a technical leader in the Solution Development Unit and as a solutions architect in Advanced Services. Haseeb holds a master of science degree in computer engineering from the University of Southern California. He’s CCIE certified (number 7848) and has 14 years of industry experience.
Chris O'Brien is a technical marketing manager with Cisco’s Computing Systems Product Group. He is currently focused on developing infrastructure best practices and solutions that are designed, tested, and documented to facilitate and improve customer deployments. Previously, O'Brien was an application developer and has worked in the IT industry for more than 20 years.
Remember to use the rating system to let Haseeb and Chris know if you have received an adequate response.
Because of the volume expected during this event, Haseeb and Chris might not be able to answer every question. Remember that you can continue the conversation in the Data Center community, subcommunity Unified Computing shortly after the event. This event lasts through September 27, 2013. Visit this forum often to view responses to your questions and the questions of other Cisco Support Community members.
Webcast related links:
Can you please demystify how many "flavors" of FlexPod solutions we have today? Initially we were talking about UCS B-Series chassis with B200 and B250 blades plus Nexus 5500 and NetApp FAS3210. This evolved later, but now I'm not sure anymore which Nexus devices are officially supported and what equipment is part of FlexPod Express (is that still the official name?).
P.S. Any chance you can post a link with your webcast delivered yesterday?
The slides for the webinar can be found at:
The webinar recording will be posted in a week or so. We covered various models of the FlexPod in the slides.
At a high level, you can use the Nexus 5000 or 7000 as the switch and either B-Series or C-Series servers (managed through UCSM). On the storage side, you can use the FAS 2200, 3200, or 6200 series. Since the pod is "Flex(ible)", you have all of these options. The validated options are documented in the CVDs and were covered in the slides presented at the webinar (link above).
FlexPod Express consists of FAS 2200 series storage, C-Series servers, and a Nexus 3000 series switch.
Hope that helps
Let me try to push this a little bit further: I recently noticed that FlexPod is now divided into three groups: FlexPod Datacenter, FlexPod Select, and FlexPod Express. You already clarified what equipment is included in FlexPod Express, but I'm not sure what the difference is between FlexPod Datacenter and FlexPod Select. It seems to me that the easiest approach would be to start with the hardware differences.
The FlexPod Select solution focuses on a specific (select) workload. We are currently targeting high-performance applications like Hadoop. As such, the FlexPod Select solutions with Cloudera CDH or Hortonworks HDP use hardware optimized for each of these application environments. Compared with FlexPod Datacenter, the Nexus, UCS, and NetApp FAS components are consistent across offerings, but Select may introduce additional NetApp or Cisco technologies to optimize for the workload. In the case of FlexPod Select with Hadoop, the solution uses the NetApp E-Series to address the data-intensive nature of Hadoop workloads.
I have another one. Can you please compare Clustered ONTAP with vMSC? I think I'm familiar with Clustered ONTAP, and it seems to me that vMSC is a competing feature.
BTW, which switches are officially supported for the Clustered ONTAP feature (I'm talking about the switch that is used exclusively with Clustered ONTAP)? I noticed you have the Nexus 5596 in your slides, but some NetApp guys are talking about the CN1610?!
Let me start with the switches that support Clustered ONTAP. As you can see in the slides, three models of the Nexus 5000 series are supported: the 5010, 5020, and 5596. We validated our solutions with the 5596 because we wanted the ability to increase the number of nodes to the maximum in the future. Depending on your scale, you can choose any of these models. The actual configuration on these switches is mandated by NetApp, which provides a configuration file based on the switch model. NetApp also OEMs a switch (the CN16xx) that can be used in place of the Nexus 5000. This switch is a non-Cisco platform that NetApp certifies for Clustered ONTAP for cost-conscious customers.
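To make the "NetApp provides the configuration" point concrete: the NetApp-supplied reference configuration file for your exact switch model is authoritative, but a cluster-interconnect port on a Nexus 5596 looks roughly like the hypothetical sketch below (port numbers and descriptions are invented):

```
! Hypothetical example only -- use the NetApp-provided configuration
! file for your exact switch model in production.
! Enable jumbo frames switch-wide (on the Nexus 5000 series, MTU is
! set through a network-qos policy, not per interface):
policy-map type network-qos jumbo
  class type network-qos class-default
    mtu 9216
system qos
  service-policy type network-qos jumbo

! A port facing a cluster node's cluster network port (e.g., e0a):
interface Ethernet1/1
  description Cluster node-01 e0a
  switchport mode access
  spanning-tree port type edge
```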
While at a high level both Clustered ONTAP and vMSC/MetroCluster might seem to serve the same purpose, i.e., disaster avoidance, these technologies operate at different layers. Clustered ONTAP gives customers the ability to increase scale and performance within a data center by combining multiple HA pairs and presenting them as one system. MetroCluster, on the other hand, gives you the ability to distribute the two nodes of an HA pair across geographically separated data centers. MetroCluster introduces MDS switches as well as FibreBridges to achieve this long-distance separation.
Currently MetroCluster is supported on an HA pair running in 7-Mode, but in the future, MetroCluster functionality will incorporate Clustered ONTAP and you will be able to distribute your cluster across sites.
FlexPod supports multiple hypervisors as well as bare-metal servers. We have validated and documented both VMware and Hyper-V. Bare-metal installs of Windows as well as RHEL are also covered in our deployment guides.
You can find the related CVDs on design zone:
Could you please explain what must be configured on the Fabric Interconnect so we can connect (NetApp) storage directly, without a Nexus switch, and explain what the advantage of using a Nexus in the middle is. Is there any CVD explaining that topology?
In addition, it would be nice to know why the FI works in NPV mode by default and not in switched mode (i.e., why NPV is preferred).
And the last question from me: do you know why NPIV is not enabled by default on the Nexus switch, i.e., what is the drawback of having this feature turned on?
I would suggest you read this white paper, which details the pros and cons of directly connected storage.
http://www.cisco.com/en/US/partner/prod/collateral/ps10265/ps10276/whitepaper_c11-702584.html This paper captures all the major design points for Ethernet and FC protocols.
I would only add that in FlexPod we are trying to create a highly available and "flexible" solution; Nexus switching helps us deliver on both with vPC and unified ports.
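As an illustration of the vPC piece, a minimal configuration on a Nexus 5548 pair might look like the sketch below. The domain ID, IP addresses, and port-channel numbers are invented, the peer switch needs the mirror-image configuration, and member interfaces still have to be added to each port channel:

```
! Hypothetical sketch -- adapt IDs, VLANs, and addresses to your fabric.
feature vpc

vpc domain 10
  peer-keepalive destination 10.0.0.2 source 10.0.0.1

interface port-channel1
  description vPC peer-link to the other Nexus 5548
  switchport mode trunk
  vpc peer-link

interface port-channel11
  description Uplink to UCS Fabric Interconnect A
  switchport mode trunk
  vpc 11
```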
NPV equates to end-host mode, which allows the system to present all of the servers as N-Ports to the external fabric. In this mode, the vHBAs are pinned to the egress interfaces of the fabric interconnects. This pinning removes the potential for loops in the SAN fabric, and host-based multipathing of the vHBAs accounts for potential uplink failures. NPV (end-host) mode simplifies the attachment of UCS to the SAN fabric, and that is why it is the default.
So for your last question, I will have to put my Product Manager hat on, so bear with me. First off, there is no drawback to enabling the NPIV feature (none that I am aware of); the Nexus 5000 platform simply offers you the choice to design for and support multiple FC initiators (N-Ports) per F-Port via NPIV. This allows for the integration of the FI end-host mode described above. I imagine that, being a unified access layer switch, the Nexus 5000 enabled standard Fibre Channel switching capability and features first. Enabling NPIV is a customer choice based on their specific access layer requirements.
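For reference, turning on NPIV on the Nexus 5000 is a single feature toggle, sketched below; no change is needed on the UCS Fabric Interconnect side, since end-host (NPV) mode is its default:

```
! On each Nexus 5000 acting as the FC switch:
feature npiv

! Verify the feature is enabled:
show feature | include npiv

! The F-Port facing the UCS Fabric Interconnect uplink can then
! accept multiple fabric logins (one per vHBA pinned to that uplink).
```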
An upstream switch connected to the Cisco UCS Fabric Interconnect was reloaded to apply new code. The reload caused 10 critical errors, with references to "NO LINK BETWEEN IOM PORT 1/2/1,2,3,4 AND FABRIC INTERCONNECT B:1/1,2,3,4." Additionally, I am getting
Retrieving user privileges ... Please wait...
System may not be ready ... assigning default role.
when I SSH into the Cisco UCS 6200.
Could anyone help me resolve this issue?
I am Sanan. My customer gave me a design with these requirements: a 48-port Cisco switch that can work as gigabit UTP with 2 fiber interfaces at 1 Gb, compatible 1 Gb transceivers (GBIC) for that switch, a core switch that works at 10 Gb/1 Gb, and compatible 1 Gb or 10 Gb transceivers for the core switch.
They produced the design below from their side, and it is a new office as well.
5 qty of WS-C2960-48PST-L, which they want to connect to an N2K-C2232PP-10GE. Please advise on that, or please suggest a suitable alternative.
This session is for FlexPod-related questions, and unfortunately none of our FlexPod designs include 2900 series switches or campus designs. To answer part of your query: you can potentially connect your N2K to a Nexus 5548UP using enough uplinks to take care of oversubscription, and the N5K unified ports support both 1G and 10G. I feel your query about transceivers can be handled by the sales/support team. Please also consider posting this question to the Campus Switching aliases, because they can point out the gotchas and shortcomings of the design.
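If you do pair the N2K-C2232PP with a Nexus 5548UP parent, the fabric-extender association is configured roughly as in the sketch below. This is illustrative only: the FEX number and uplink ports are hypothetical, and the uplink count should be sized for your oversubscription target:

```
! Hypothetical sketch on the Nexus 5548UP parent switch.
feature fex

! Bundle four uplinks to the FEX into a fabric port channel:
interface Ethernet1/1-4
  channel-group 100

interface port-channel100
  switchport mode fex-fabric
  fex associate 100

! Host-facing FEX ports then appear as Ethernet100/1/x interfaces.
```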