On the management front we have two new things to talk about:
1) Freeing server administrators from the tyranny of sheet metal. UCS Manager delivers full administrative parity across server form factors, and now supports connecting more C-Series rack servers in a single UCS system. When you get right down to it, servers are just different combinations of processing, memory, local disk and I/O capability. Some combinations happen to be best as blades, some happen to be best as rack mounts, but we shouldn’t have to care about the shape of the sheet metal when it comes to systems management. With UCS you don’t. Racks and blades show up together as resources available and managed in a unified, self-integrating system, complete with an XML API. Unified management in UCS lets us finally think outside the box when we deploy and manage compute infrastructure.
2) Multi-UCS Manager: this might be the most important part of this announcement, because it takes UCS well over the horizon in terms of scalability. Multi-UCS Manager, as the name implies, is the capability to manage across multiple instances of UCS. That allows for synchronization of service profiles, common pools of unique identifiers, and centralized visibility and control across many thousands of servers. Multi-UCS Manager takes the underlying policy-based management philosophy of UCS and globalizes it, with the capability to manage UCS instances within a single data center or around the world. Scheduled for availability in 2HCY12, this is big news and there will be more to come on this topic.
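Everything UCS Manager sees is reachable through that XML API, which is part of what makes blades and racks interchangeable from a management point of view. Here's a minimal sketch of what the request/response traffic looks like -- the credentials and cookie value are placeholders, while the method names (`aaaLogin`, `configResolveClass`), the `outCookie` attribute and class IDs like `computeBlade` follow the published XML API:

```python
# Sketch of Cisco UCS Manager XML API message handling.
# Requests are POSTed to http://<ucsm-address>/nuova (address is a placeholder).
import xml.etree.ElementTree as ET

def build_login(user: str, password: str) -> str:
    """aaaLogin request; the reply carries a session cookie in outCookie."""
    return f'<aaaLogin inName="{user}" inPassword="{password}" />'

def parse_cookie(reply_xml: str) -> str:
    """Extract the session cookie from an aaaLogin reply."""
    return ET.fromstring(reply_xml).get("outCookie")

def build_class_query(cookie: str, class_id: str) -> str:
    """configResolveClass returns every managed object of a class --
    e.g. computeBlade and computeRackUnit enumerate blades and rack
    servers from the same unified inventory."""
    return (f'<configResolveClass cookie="{cookie}" '
            f'classId="{class_id}" inHierarchical="false" />')

# Example reply shape for a successful login (cookie value is made up):
sample_reply = '<aaaLogin response="yes" outCookie="1363089748/abc123" />'
cookie = parse_cookie(sample_reply)
print(build_class_query(cookie, "computeBlade"))
```

The point of the sketch: one query against `computeBlade` and one against `computeRackUnit` walks the entire compute inventory, blade or rack, through the same interface.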
New UCS I/O components:
1) Last year we introduced the 6248 Fabric Interconnect, with unified ports, 40% latency reduction and increased system bandwidth. Here comes its big brother, the 6296, weighing in at 2U, 96 ports, sub-2µs latency and a whopping 2Tb of switching capacity. That means more flexibility and capacity in an architecture that puts all the servers in the system one network hop away from each other, be they blades or racks.
2) A new I/O module for the UCS blade chassis, the 2204XP. This fabric extender doubles the amount of bandwidth that can be provisioned to each chassis to 160Gb.
3) Finally, but probably the most exciting for the server geeks among us: the VIC 1240. This is the Cisco Virtual Interface Card now embedded in the new B200 M3 blade server. The VIC 1240 is a dual 20Gb LOM with high-performance virtualization that comes standard. An expander module can double the trouble to 4x20Gb. By my math that’s 80Gb to a single-slot blade: so how do you use it all? With Adapter-FEX technology, the VIC can carve that pipe into up to 256 vNICs or vHBAs that can be presented to a bare-metal OS. VM-FEX technology takes it a step further, allowing those virtual adapters to be connected directly to virtual machines. The VIC can also be configured to bypass hypervisor switching, which offloads that work from your processors and reduces processor utilization by up to 30%. Moving virtual switching to the VIC also improves throughput by up to 10% and application performance by up to 15%. The idea here is to bring virtual I/O to near-bare-metal levels and allow more applications to be virtualized -- which means greater operational agility and service resiliency.
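For the skeptics, the back-of-the-envelope math on that 80Gb figure checks out in a couple of lines (the port counts and speeds are the ones quoted above; nothing else is assumed):

```python
# VIC 1240 bandwidth, per the figures above: a dual 20Gb LOM,
# doubled to 4x20Gb by the optional expander module.
lom_ports, port_gb = 2, 20
ports_with_expander = 2 * lom_ports      # expander doubles the port count
total_gb = ports_with_expander * port_gb
print(total_gb)                          # 80Gb to a single-slot blade
```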
Don’t forget the servers! By the end of this year we’ll have roughly doubled the number of servers in the UCS portfolio. Here’s how we’re kicking things off:
1) Two new rack servers: the C220 M3 and C240 M3. It’s best to compare the specs on the product pages, because these are feature-loaded and my fingers are tired. They are of course based on Intel’s screaming hot new Xeon E5-2600 processor family, which was announced on Tuesday. We like to say Cisco and Intel are joined at the chip, after all. In addition to bringing new horsepower and efficiency gains, the key differentiator for these machines is that they can be managed by UCS Manager right alongside B-Series blades, in one big happy pool of abstracted server resources.
2) The B200 M3. One of the upshots of the UCS architecture is that we’ve pulled all the switches and systems management modules out of the blade chassis. That leaves more room, power and cold air for computing, which manifests itself here in a single-slot blade with 24 DIMM slots and up to three-quarters of a terabyte of RAM. Server architecture, much like life, is all about balance, though. That’s where the Xeon E5-2600 processors and the aforementioned VIC 1240 (80Gb of I/O!) come in. The B200 M3 brings an industry-leading set of capabilities to this class of blade and is a fantastic addition to the UCS family.
One of the best things about UCS is forward and backward compatibility: all generations of product are fully interoperable, which yields strong investment protection. Modular yet unified. The Zen of computing architecture, if you will. In fact, we’re putting a stake in the ground: the dramatically simplified blade chassis Cisco introduced to the industry in 2009 will take customers through the end of this decade. Good through 2020…you heard it here first. Just think how young Paul will still look in this video by then.
My colleagues will post today about how all of this nets out in application performance, and it’s a very good story indeed. In the meantime we’ve posted some easy-to-read performance briefs. Also, don’t forget that we have a “view 3D model” link right under the product pictures for all these new additions. If you want to take a close look, that’s a fun way to do it. Thanks for coming along.