Cisco Unified Computing System (UCS) unifies compute, network, and storage technologies into a single, cohesive system. A UCS system consists of various parts that work together and present the view of a single system. These parts of a UCS system are:
UCS 6100 series Fabric Interconnects
A UCS system consists of one or two UCS 6100 series switches, or fabric interconnects. A UCS system with two fabric interconnects is typically deployed as an active-active pair that supports failover. These fabric interconnects are based on the same switching technology as the Cisco Nexus 5000 Series Switches; however, Nexus 5000 series switches cannot work as UCS 6100 series fabric interconnects and vice versa, since the UCS 6100 series provides additional features and management capabilities. On these fabric interconnects, out-of-band management, including switch redundancy, is supported through dedicated management and clustering ports. The fabric interconnects also feature an embedded UCS Manager, which is used to manage the entire UCS system, and they support the Cisco VN-Link architecture, which is required for policy-based virtual machine connectivity.
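The active-active pairing above can be pictured with a small sketch. This is a conceptual model only, not actual UCS Manager code: the `FabricInterconnect` and `cluster_primary` names are illustrative. Both members forward data traffic, while one healthy member holds the primary management role at a time.

```python
# Conceptual sketch of fabric-interconnect failover (illustrative names,
# not a real UCS API). Both members are active for data traffic; the
# primary management role moves to the surviving member on failure.

class FabricInterconnect:
    def __init__(self, name):
        self.name = name
        self.up = True  # health state of this fabric interconnect

class FabricInterconnectPair:
    """An active-active pair; one member holds the primary management role."""
    def __init__(self, a, b):
        self.members = [a, b]

    def cluster_primary(self):
        # The first healthy member holds the primary management role.
        for fi in self.members:
            if fi.up:
                return fi
        return None  # both members down: management plane unavailable

fi_a = FabricInterconnect("FI-A")
fi_b = FabricInterconnect("FI-B")
pair = FabricInterconnectPair(fi_a, fi_b)

print(pair.cluster_primary().name)  # FI-A
fi_a.up = False                     # simulate failure of FI-A
print(pair.cluster_primary().name)  # FI-B
```

The point of the sketch is that failover concerns only the management role; data-plane forwarding is active on both members throughout.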
UCS 2100 series Fabric Extenders
The UCS 2100 series fabric extender removes the need for independently managed, chassis-resident blade switches. It enables the UCS 6100 fabric interconnect to provide all access-layer switching for the connected servers. The fabric extender multiplexes and forwards all traffic, using a cut-through architecture, over one to four 10-Gbps unified fabric connections to the fabric interconnect. This traffic is then switched to its destination by the fabric interconnect; no switching whatsoever is done by the fabric extender. Fabric extenders are physically part of the UCS 5100 series blade chassis; logically, however, they act as distributed line cards and are thus part of the UCS 6100 series fabric interconnect. Each blade chassis should have one or two fabric extenders for connectivity to the fabric interconnects. In a correct setup, a fabric extender is connected to only one fabric interconnect; the other fabric extender in the blade chassis should be connected to the other fabric interconnect.
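Because the fabric extender multiplexes all blade traffic onto one to four 10-Gbps uplinks, the uplinks can be oversubscribed. A rough calculation, assuming a chassis of eight half-width blades (the eight-blade figure comes from the UCS 5108 and is an assumption here) with one 10-Gbps link each toward the fabric extender:

```python
def oversubscription_ratio(blades, uplinks, link_gbps=10):
    """Ratio of server-facing bandwidth to uplink bandwidth at the fabric extender."""
    if not 1 <= uplinks <= 4:
        raise ValueError("a UCS 2100 fabric extender supports 1 to 4 uplinks")
    return (blades * link_gbps) / (uplinks * link_gbps)

# Eight half-width blades, varying uplink counts:
for uplinks in (1, 2, 4):
    print(f"{uplinks} uplink(s): {oversubscription_ratio(8, uplinks):.0f}:1")
# 1 uplink(s): 8:1
# 2 uplink(s): 4:1
# 4 uplink(s): 2:1
```

This is why the number of cabled uplinks per fabric extender is a sizing decision: more uplinks lower the oversubscription ratio for bandwidth-sensitive workloads.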
UCS 5100 series Blade Chassis
The UCS 5100 series blade chassis are designed to house UCS blade servers. The chassis is 6RU in size and 32 inches deep. These blade chassis are logically part of the UCS 6100 series fabric interconnect, so they fall within a single coherent management domain. The UCS 5108 blade chassis is designed to support blades of two form factors: half width and full width. The blade chassis also houses four power-supply bays, with power entry in the rear and redundant, hot-swappable power supply units accessible from the front panel. It also has eight hot-swappable fan trays and two fabric extender slots.
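The 6RU chassis height translates directly into rack density. A quick sketch, assuming a standard 42U rack and the eight half-width-blade capacity of the UCS 5108 (both figures are assumptions, not stated above):

```python
RACK_UNITS = 42         # standard rack height (assumption)
CHASSIS_RU = 6          # UCS 5100 series chassis height, per the text
BLADES_PER_CHASSIS = 8  # half-width slots in a UCS 5108 (assumption)

chassis_per_rack = RACK_UNITS // CHASSIS_RU
half_width_blades = chassis_per_rack * BLADES_PER_CHASSIS
print(chassis_per_rack, "chassis per rack,", half_width_blades, "half-width blades")
# 7 chassis per rack, 56 half-width blades
```

In practice the usable count is lower once rack space is reserved for the fabric interconnects, cabling, and power distribution.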
UCS B-Series Blade Servers
The UCS B-Series blade servers are based on the x86 architecture and support Microsoft Windows, Red Hat Linux, variants of UNIX, and virtualization software such as VMware ESX. The UCS B200 M1 Blade Server is a half-width blade that uses two Intel Xeon 5500 series processors and supports up to 96 GB of DDR3 RAM. The half-width server supports a single mezzanine card and two optional hot-swappable SAS disk drives. The blades also feature a blade service processor, the baseboard management controller (BMC), and support management using IPMI.
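The 96 GB maximum follows from the blade's DIMM slot count. A quick check, assuming the B200 M1's 12 DDR3 DIMM slots (the slot count is an assumption, not stated above):

```python
DIMM_SLOTS = 12  # B200 M1 DIMM slot count (assumption)

def max_memory_gb(dimm_gb):
    """Maximum DDR3 capacity when every slot holds a DIMM of the given size."""
    return DIMM_SLOTS * dimm_gb

print(max_memory_gb(8))  # 96 -> matches the stated 96 GB maximum with 8 GB DIMMs
print(max_memory_gb(4))  # 48
```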
UCS Network Adapters
The UCS network adapter cards carry out all the network I/O for the blades and are built in a mezzanine card form factor. The UCS system offers two different types of network adapters: the UCS 82598KR-CI 10 Gigabit Ethernet adapter and the UCS M71KR-E/UCS M71KR-Q converged network adapters. These adapters have two unified fabric connections, one to each of the two fabric extender slots. The UCS 82598KR-CI adapter presents a view of two 10 Gigabit Ethernet NICs to the system. The UCS M71KR-E/UCS M71KR-Q adapter is for organizations that require Fibre Channel compatibility; it presents a view of two Ethernet NICs and two HBAs.