KVM to Blades Works, KVM to Rack Mounts Does Not

Crank
Level 1

Interface for mgmt is x.x.225.0/24
Inband mgmt is x.x.226.0/24
UCS Manager 4.2

- The default IP pool ext-mgmt is not configured with any IP addresses.
- Pool KVM-InBand is configured.
- In LAN Cloud > Global Policies, the inband profile is configured to use the KVM-InBand pool, with the correct network and VLAN group assignment.
- Blade servers properly pull an inband IP address, and service profiles also pull from the KVM pool properly as assigned through the service profile template.
- The KVM console connects and displays as expected for blade servers.

- Rack-mount servers properly pull an inband IP address from the KVM pool that was created; they currently have no service profile associated.
- Cannot ping or connect to the IP address displayed for the KVM connection on ANY rack-mount server.

What configuration am I missing? The rack-mount servers connect via server ports on each FI. Their inband mgmt should be using the uplinks from the FI the same way as the blade servers, correct?
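
For reference, this is roughly how the pool leases were checked, sketched with the Cisco UCS Manager Python SDK (ucsmsdk). The hostname and credentials are placeholders, and the class/attribute names are from the SDK's standard object model, so adjust to your environment:

from ucsmsdk.ucshandle import UcsHandle

# Placeholders - substitute your UCS Manager VIP and credentials.
handle = UcsHandle("ucsm.example.local", "admin", "password")
handle.login()

# Every IP pool with its assigned/total counts (ext-mgmt, KVM-InBand, etc.).
for pool in handle.query_classid("IppoolPool"):
    print(pool.dn, "assigned:", pool.assigned, "of", pool.size)

# Which server/CIMC each leased address from the KVM-InBand pool went to.
for addr in handle.query_classid("IppoolPooled"):
    if "KVM-InBand" in addr.dn and addr.assigned_to_dn:
        print(addr.id, "->", addr.assigned_to_dn)

handle.logout()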

Accepted Solution

Crank
Level 1

Solved:

This was due to the cabling on the VIC 1467 and VIC 1455. Between the two adapters we have a possible 8 ports total (4 each). On each card, ports 1 & 2 were cabled, and 3 & 4 were going to be cabled once more cables came in. When running 2 connections per card, they had to be split across ports 1 & 3, with port 1 going to fabric A and port 3 going to fabric B. We had ports 1 & 2 going to fabric A on the MLOM VIC, and ports 1 & 2 on the PCIe VIC (1455) were going to fabric B.

We eventually removed the PCIe VIC completely since it kept causing issues. Now we are using just the MLOM (VIC 1467) with 4 connections: ports 1 & 2 go to fabric A and ports 3 & 4 go to fabric B.

This resolved the issue.
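
After the recable, a quick way to confirm the rack servers' inband KVM addresses actually answer is a TCP check against the vKVM web port. A minimal sketch, assuming the KVM page is reachable over HTTPS on 443; the addresses below are hypothetical examples from the x.x.226.0/24 pool:

import socket

# Hypothetical inband KVM addresses - substitute the ones UCSM shows for your rack servers.
kvm_ips = ["x.x.226.11", "x.x.226.12"]

for ip in kvm_ips:
    try:
        # A TCP connect to 443 is a simple "is anything answering" check
        # before trying to launch the KVM console.
        with socket.create_connection((ip, 443), timeout=3):
            print(ip, "answers on tcp/443")
    except OSError as err:
        print(ip, "no answer:", err)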

 
