On the management front we have two new things to talk about:
1) Freeing server administrators from the tyranny of sheet metal. UCS Manager delivers total administrative parity across server form factors, and now supports connectivity for more C-Series rack servers in a single UCS system. When you get right down to it, servers are just different combinations of processing, memory, local disk and I/O capability. Some combinations happen to be best as blades, some happen to be best as rack mounts, but we shouldn’t have to care about the shape of the sheet metal when it comes to systems management. With UCS you don’t. Rack and blade servers show up together as resources available and managed in a unified, self-integrating system, complete with an XML API (there’s a quick scripting sketch after this list). Unified management in UCS lets us finally think outside the box when we deploy and manage compute infrastructure.
2) Multi-UCS Manager: this might be the most important part of this announcement, because it takes UCS well over the horizon in terms of scalability. Multi-UCS Manager, as the name implies, is the capability to manage across multiple instances of UCS. This allows for synchronization of service profiles, common pools of unique identifiers (sketched conceptually below) and centralized visibility and control across many thousands of servers. Multi-UCS Manager takes the underlying policy-based management philosophy of UCS and globalizes it, with the capability to manage UCS instances within a single data center or around the world. Scheduled for availability in 2HCY12, this is big news and there will be more to come on this topic.
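Since UCS Manager exposes that XML API, the unified blade-and-rack inventory from item 1 is scriptable. Here is a minimal Python sketch, assuming a UCS Manager instance reachable at ucsm.example.com and placeholder credentials; the class and attribute names reflect my reading of the XML API, so treat it as illustrative and check the official API reference before relying on it.

```python
# Minimal sketch of pulling the unified blade + rack inventory over the UCS
# Manager XML API. Host, credentials and printed attributes are placeholders;
# consult the UCS Manager XML API reference for the authoritative schema.
import xml.etree.ElementTree as ET
import requests

UCSM_URL = "https://ucsm.example.com/nuova"   # hypothetical UCS Manager endpoint

def call(xml_body):
    """POST one XML API method and return the parsed response element."""
    resp = requests.post(UCSM_URL, data=xml_body, verify=False)
    resp.raise_for_status()
    return ET.fromstring(resp.text)

# Log in and grab the session cookie.
login = call('<aaaLogin inName="admin" inPassword="password" />')
cookie = login.get("outCookie")

# Blades and rack servers are just two classes in the same object model.
for class_id in ("computeBlade", "computeRackUnit"):
    reply = call('<configResolveClass cookie="%s" classId="%s" '
                 'inHierarchical="false" />' % (cookie, class_id))
    for server in reply.iter(class_id):
        print(class_id, server.get("dn"), server.get("model"),
              server.get("totalMemory"), "MB")

call('<aaaLogout inCookie="%s" />' % cookie)
```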
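Item 2 mentions common pools of unique identifiers shared across UCS instances. The Multi-UCS Manager interface isn’t available yet, so the following is purely a conceptual sketch of why a single global pool matters: if every domain allocates MACs (or WWNs, or UUIDs) from the same pool, a service profile can move between domains without identity collisions. The names and pool layout are illustrative only.

```python
# Conceptual illustration only -- not the Multi-UCS Manager API. A single
# shared pool hands out identities (here, MAC addresses) to service profiles
# across multiple UCS domains, so no two domains can mint the same address.
class GlobalMacPool:
    def __init__(self, prefix="00:25:B5", start=0, size=4096):
        self.prefix = prefix          # the familiar UCS MAC pool prefix
        self.next_suffix = start
        self.end = start + size
        self.assigned = {}            # mac -> (domain, service profile)

    def allocate(self, domain, profile):
        if self.next_suffix >= self.end:
            raise RuntimeError("MAC pool exhausted")
        s = self.next_suffix
        self.next_suffix += 1
        mac = "%s:%02X:%02X:%02X" % (self.prefix, (s >> 16) & 0xFF,
                                     (s >> 8) & 0xFF, s & 0xFF)
        self.assigned[mac] = (domain, profile)
        return mac

pool = GlobalMacPool()
print(pool.allocate("ucs-domain-sjc", "web-tier-01"))   # 00:25:B5:00:00:00
print(pool.allocate("ucs-domain-ams", "web-tier-02"))   # 00:25:B5:00:00:01
```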
New UCS I/O components:
1) Last year we introduced the 6248 Fabric Interconnect, with unified ports, a 40% latency reduction and increased system bandwidth. Here comes its big brother, the 6296, weighing in at 2U, 96 ports, sub-2µs latency and a whopping 2Tb of switching capacity (a back-of-the-envelope check follows below). That means more flexibility and capacity in an architecture that puts all the servers in the system one network hop away from each other, be they blades or racks.
2) A new I/O module for the UCS blade chassis, the 2204XP. This fabric extender doubles the amount of bandwidth that can be provisioned to each chassis to 160Gb.
3) Finally, but probably the most exciting for the server geeks among us: the VIC 1240. This is the Cisco Virtual Interface Card now embedded in the new B200 M3 blade server. The VIC 1240 is a dual 20Gb LOM with high-performance virtualization that comes standard. An expander module can double the trouble to 4x20Gb. By my math that’s 80Gb to a single-slot blade (worked out below): so how do you use it all? With Adapter-FEX technology, the VIC can carve that pipe into 256 vNICs or vHBAs that can be presented to a bare metal OS. VM-FEX technology takes it a step further, allowing those virtual adapters to be connected directly to virtual machines. The VIC can also be configured to bypass hypervisor switching, which offloads that work from your processors and reduces processor utilization by up to 30%. Moving virtual switching to the VIC also improves throughput by up to 10% and application performance by up to 15%. The idea here is to bring virtual I/O to near-bare-metal levels and allow more applications to be virtualized -- which means greater operational agility and service resiliency.
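Here’s the quick math behind the VIC 1240 numbers above. Treating the expander as simply doubling the base 2x20Gb configuration is my reading of the text rather than a spec sheet, and the per-adapter figure is just an even-split illustration.

```python
# Back-of-the-envelope math for the VIC 1240 paragraph above. The lane counts
# and the 256-adapter limit come from the text; everything else is arithmetic.
gb_per_lane = 20
base_lanes = 2            # 2x20Gb on the embedded LOM
expander_lanes = 2        # port expander assumed to add another 2x20Gb
per_blade_gb = (base_lanes + expander_lanes) * gb_per_lane
print(per_blade_gb)                         # 80 Gb to a single-slot blade

max_virtual_adapters = 256                  # Adapter-FEX / VM-FEX vNICs + vHBAs
print(per_blade_gb / max_virtual_adapters)  # ~0.31 Gb each if carved evenly
```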
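And one way to sanity-check the 6296’s headline switching number from item 1: assume all 96 unified ports run as 10 Gigabit Ethernet and count both directions. That counting convention is my assumption, not a data sheet quote.

```python
# Rough check on the 6296 "2Tb" switching figure, assuming every one of the
# 96 unified ports runs at 10Gb and both directions are counted (full duplex).
ports = 96
gb_per_port = 10
switching_capacity_gb = ports * gb_per_port * 2
print(switching_capacity_gb)   # 1920 Gb, i.e. roughly 2Tb
```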
Don’t forget the servers! By the end of this year we’ll have roughly doubled the number of servers in the UCS portfolio. Here’s how we’re kicking things off:
1) Two new rack servers: the C220 M3 and C240 M3. It’s best to compare the specs here on the product pages, because these are feature-loaded and my fingers are tired. They are of course based on Intel’s screaming hot new Xeon E5-2600 processor family, which was announced on Tuesday. We like to say Cisco and Intel are joined at the chip, after all. In addition to bringing new horsepower and efficiency gains, the key differentiator for these machines is that they can be managed right alongside B-Series blades in one big happy pool of abstracted server resources, by UCS Manager.
2) The B200 M3. One of the upshots of the UCS architecture is that we’ve pulled all the switches and systems-management modules out of the blade chassis. This leaves more room, power and cold air for computing, which manifests itself here in a single-slot blade with 24 DIMM slots and up to three-quarters of a terabyte of RAM. Server architecture, though, much like life, is all about balance. That’s where the Xeon E5-2600 processors and the aforementioned VIC 1240 (80Gb of I/O!) come in. The B200 M3 brings an industry-leading set of capabilities to this class of blade and is a fantastic addition to the UCS family.
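For the curious, that three-quarters-of-a-terabyte figure falls straight out of the slot count. The 32GB DIMM size is my assumption, so check the B200 M3 spec sheet for the supported configurations.

```python
# Where the B200 M3 memory ceiling comes from, assuming 32GB DIMMs in all
# 24 slots (the DIMM size is an assumption, not a quoted spec).
dimm_slots = 24
gb_per_dimm = 32
total_gb = dimm_slots * gb_per_dimm
print(total_gb, "GB =", total_gb / 1024, "TB")   # 768 GB = 0.75 TB
```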
One of the best things about UCS is forward and backward compatibility: all generations of product are fully interoperable, which yields strong investment protection. Modular yet unified. The Zen of computing architecture, if you will. In fact, we’re putting a stake in the ground: the dramatically simplified blade chassis Cisco introduced to the industry in 2009 will take customers through the end of this decade. Good through 2020…you heard it here first. Just think how young Paul will still look in this video by then.
My colleagues will post today to talk about how all of this nets out in application performance, and it’s a very good story indeed. In the meantime we’ve posted some easy-to-read performance briefs. Also, don’t forget that we have a “view 3D model” link right under the product pictures for all these new additions. If you want to take a close look, that’s a fun way to do it. Thanks for coming along.