Cisco Unified Computing System (UCS) unifies compute, network, and storage technologies into a single, cohesive system. A UCS deployment consists of several components that work together to present a single-system view. These components are:
UCS 6100 series Fabric Interconnects
A UCS system consists of one or two UCS 6100 series fabric interconnects. A system with two fabric interconnects is typically deployed as an active-active pair that supports failover. The fabric interconnects are based on the same switching technology as the Cisco Nexus 5000 Series Switches; however, a Nexus 5000 series switch cannot serve as a UCS 6100 series fabric interconnect (or vice versa), because the UCS 6100 series provides additional features and management capabilities. The fabric interconnects support out-of-band management, including switch redundancy, through dedicated management and clustering ports. They also embed UCS Manager, which is used to manage the entire UCS system, and support the Cisco VN-Link architecture required for policy-based virtual machine connectivity.
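The active-active pairing described above can be sketched as follows. This is an illustrative model only (the class and method names are invented for this example, not a Cisco API): flows are spread across both interconnects while both are healthy, and all traffic shifts to the surviving peer on a failure.

```python
# Illustrative sketch of an active-active fabric interconnect pair with
# failover. Names are hypothetical; real failover is handled by UCS
# Manager and the fabric itself.
class InterconnectPair:
    def __init__(self):
        # Both interconnects ("A" and "B") start healthy.
        self.healthy = {"A": True, "B": True}

    def fail(self, name):
        # Mark one interconnect as down.
        self.healthy[name] = False

    def path_for(self, flow_id):
        # Hash flows across whichever interconnects are still up.
        up = [name for name, ok in self.healthy.items() if ok]
        if not up:
            raise RuntimeError("both fabric interconnects down")
        return up[flow_id % len(up)]
```

With both interconnects up, flows alternate between "A" and "B"; after `fail("A")`, every flow maps to "B" with no change visible to the caller.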
UCS 2100 series Fabric Extenders
The UCS 2100 series fabric extender removes the need for independently managed switches resident in each blade chassis. It enables the UCS 6100 fabric interconnect to provide all access-layer switching for the connected servers. The fabric extender multiplexes and forwards all traffic, using a cut-through architecture, over one to four 10-Gbps unified fabric connections to the fabric interconnect. The fabric interconnect then switches this traffic to its destination; no switching whatsoever is done by the fabric extender. Fabric extenders are physically part of the UCS 5100 series blade chassis, but logically they act as distributed line cards and are thus part of the UCS 6100 series fabric interconnect. Each blade chassis should have one or two fabric extenders for connectivity to the fabric interconnects. In a correct setup, each fabric extender is connected to only one fabric interconnect; the other fabric extender in the chassis connects to the other fabric interconnect.
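The division of labor described above can be sketched in a few lines. This is a simplified model with invented names, not Cisco code: the fabric extender only pins each blade's traffic to an uplink and passes frames upstream, while the fabric interconnect alone learns MAC addresses and makes the forwarding decision.

```python
# Hypothetical sketch: all switching happens in the fabric interconnect;
# the fabric extender is a pass-through multiplexer.
class FabricInterconnect:
    """Performs all access-layer switching for connected servers."""
    def __init__(self):
        self.mac_table = {}  # MAC address -> ingress port

    def switch(self, frame):
        # Learn the source MAC, then forward on the destination MAC;
        # unknown destinations are flooded.
        self.mac_table[frame["src_mac"]] = frame["ingress_port"]
        return self.mac_table.get(frame["dst_mac"], "flood")

class FabricExtender:
    """Multiplexes blade traffic over 1-4 uplinks; does no switching itself."""
    def __init__(self, interconnect, uplinks=4):
        assert 1 <= uplinks <= 4  # one to four 10-Gbps unified fabric links
        self.interconnect = interconnect
        self.uplinks = uplinks

    def forward(self, frame, blade_slot):
        # Pin the blade's traffic to an uplink (simple static hashing here),
        # then hand the frame up; the interconnect makes the real decision.
        uplink = blade_slot % self.uplinks
        return uplink, self.interconnect.switch(frame)
```

Note that `FabricExtender.forward` never consults a MAC table: even traffic between two blades in the same chassis goes up to the interconnect and back, which is exactly the cut-through, no-local-switching behavior described above.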
UCS 5100 series Blade Chassis
The UCS 5100 series blade chassis are designed to house UCS blade servers. The blade chassis is 6RU in size and 32 inches deep. These chassis are logically part of the UCS 6100 series fabric interconnect and thus belong to a single coherent management domain. The UCS 5108 blade chassis supports blades of two form factors: half width and full width. The chassis houses four power-supply bays with power entry in the rear and redundancy-capable, hot-swappable power supply units accessible from the front panel. It also has eight hot-swappable fan trays and two fabric extender slots.
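The 6RU form factor makes rack density easy to estimate. A minimal sketch, assuming a standard 42RU rack and the UCS 5108's slot counts (eight half-width or four full-width blades per chassis; the slot counts come from the 5108 data sheet, not the text above):

```python
# Back-of-the-envelope density math for the 6RU UCS 5108 chassis.
# Slot counts (8 half-width / 4 full-width) are assumptions from the
# 5108 data sheet, not stated in the text above.
RACK_RU = 42      # standard rack height
CHASSIS_RU = 6    # UCS 5108 height

chassis_per_rack = RACK_RU // CHASSIS_RU   # 7 chassis per rack
half_width_blades = chassis_per_rack * 8   # 56 half-width blades per rack
full_width_blades = chassis_per_rack * 4   # 28 full-width blades per rack
```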
UCS B-Series Blade Servers
The UCS B-Series blade servers are based on the x86 architecture and support Microsoft Windows, Red Hat Linux, variants of UNIX, and virtualization software such as VMware ESX. The UCS B200 M1 Blade Server is a half-width blade that uses two Intel Xeon 5500 series processors and supports up to 96 GB of DDR3 RAM. The half-width server supports a single mezzanine card and two optional hot-swappable SAS disk drives. The blades also feature a blade service processor, the baseboard management controller (BMC), and support management through IPMI.
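The 96-GB maximum works out as follows. This is hedged arithmetic: the 12-DIMM-slot count (6 per socket) is an assumption based on that generation of B200 blade, not something stated in the text above.

```python
# Hedged DIMM math for reaching the 96-GB maximum on a two-socket
# B200 M1. The 12-slot count is an assumption, not from the text above.
MAX_GB = 96
SOCKETS = 2
DIMM_SLOTS = 12               # assumed: 6 DIMM slots per socket

dimm_size = MAX_GB // DIMM_SLOTS   # 8 GB per DIMM to fill all slots
per_socket = MAX_GB // SOCKETS     # 48 GB attached to each processor
```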
UCS Network Adapters
The UCS network adapter cards carry all network I/O from the blades and are built in a mezzanine card form factor. The UCS system offers two types of network adapters: the UCS 82598KR-CI 10 Gigabit Ethernet adapter and the UCS M71KR-E/UCS M71KR-Q converged network adapters. Each adapter has two unified fabric connections, one to each of the two fabric extender slots. The UCS 82598KR-CI adapter presents two 10 Gigabit Ethernet NICs to the system. The UCS M71KR-E/UCS M71KR-Q adapters are for organizations that require Fibre Channel compatibility; they present two Ethernet NICs and two HBAs.
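Putting the adapter and fabric extender numbers together gives the chassis oversubscription ratio. A sketch under the maximum configuration described above (eight half-width blades, one adapter each with two 10-Gbps connections, two fabric extenders with four 10-Gbps uplinks apiece); the 8-blade count is an assumption about a fully populated chassis:

```python
# Hedged oversubscription math for one fully populated chassis.
GBPS = 10
BLADES = 8            # assumed: chassis full of half-width blades
LINKS_PER_ADAPTER = 2 # one unified fabric connection per fabric extender
FEX_PER_CHASSIS = 2
UPLINKS_PER_FEX = 4   # maximum uplinks per fabric extender

blade_side = BLADES * LINKS_PER_ADAPTER * GBPS       # 160 Gbps offered
uplink_side = FEX_PER_CHASSIS * UPLINKS_PER_FEX * GBPS  # 80 Gbps upstream
oversubscription = blade_side / uplink_side          # 2:1 at maximum load
```

Fewer uplinks per fabric extender raise the ratio (e.g. one uplink each gives 8:1), which is the usual trade-off between cabling cost and guaranteed bandwidth.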