Beginner

ACI Physical Interface Issue

It was time to re-architect and redesign our Data Center infrastructure. We did our due diligence: wrote RFPs, researched vendor solutions, attended seminars, and weighed options before making a decision. We decided on Cisco ACI as our SDN infrastructure. The switches and APICs arrived and were racked and powered up. We "built" the fabric with the help of a consultant, did testing, had knowledge transfer, and I attended an entry-level ACI class. All the prep work is done, and now we are starting to migrate the legacy infrastructure into the ACI fabric. Let the fun begin.

We are a private enterprise and decided that four tenants made the most sense in our case. We created the following tenants:

  1. DEV (ACI Centric Mode)
  2. TEST (ACI Centric Mode)
  3. PROD (ACI Centric Mode)
  4. VLAN_100 (Network Centric Mode)

 

We created a Layer 2 Out vPC to our legacy core switches (2 Nexus 9596UPs) in order to bring "production VLANs" into the VLAN_100 tenant for migration. We also created a Layer 3 Out vPC connecting to our legacy core switches for routing. In other words, all routing is done external to the ACI fabric.

Network services such as DNS, DHCP, NTP, load balancing, firewalls, and others will initially remain in the legacy network. If ACI tenants need those services, they either route to them or use the Layer 2 Out vPC to access them.

Tenants will initially run in unenforced mode, and contracts will be implemented and enforced gradually as the ACI fabric grows.

 

Our Data Center is, for all practical purposes, 100% virtualized. We use VMware ESXi 6.0 hosts and are anxiously awaiting ACI support for 6.5. We will integrate VMware into ACI using the VMware vDS instead of the Cisco virtual switch.

I have currently added 12 hosts to the leaf switches on ports 4 through 19. Port 4 seems to be good, but interfaces 5 through 19 have a fault. Interface 4 has a Health Score of 100; the rest of the interfaces have a Health Score of 0. Attached are the faults on the other ports.
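For what it's worth, one place to start is enumerating the zero-health interfaces programmatically by pulling health objects from the APIC REST API and filtering them. A minimal sketch in Python, assuming a `healthInst`-style payload with a `cur` attribute; the sample data below is fabricated for illustration, not taken from the actual fabric:

```python
import json

# Fabricated sample of what an APIC health query might return. A real query
# would be an authenticated GET against the APIC (e.g. per-interface health
# under topology/pod-1/node-101/...); attribute names here are assumptions.
SAMPLE = json.dumps({
    "imdata": [
        {"healthInst": {"attributes": {
            "dn": "topology/pod-1/node-101/sys/phys-[eth1/4]/phys/health",
            "cur": "100"}}},
        {"healthInst": {"attributes": {
            "dn": "topology/pod-1/node-101/sys/phys-[eth1/5]/phys/health",
            "cur": "0"}}},
        {"healthInst": {"attributes": {
            "dn": "topology/pod-1/node-101/sys/phys-[eth1/6]/phys/health",
            "cur": "0"}}},
    ]
})

def zero_health_ports(payload: str) -> list:
    """Return the interface IDs whose current health score is 0."""
    bad = []
    for item in json.loads(payload)["imdata"]:
        attrs = item["healthInst"]["attributes"]
        if int(attrs["cur"]) == 0:
            # dn looks like ...phys-[eth1/5]/...; pull out the eth1/x part
            port = attrs["dn"].split("phys-[")[1].split("]")[0]
            bad.append(port)
    return bad

print(zero_health_ports(SAMPLE))  # -> ['eth1/5', 'eth1/6']
```

This only narrows down *which* ports are unhealthy; the faults themselves (as attached) point at why.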

Any ideas of where to start looking?

 

Thanks


9 REPLIES
Enthusiast

Can you share the access policies you used to configure the interfaces?  Also, how did you "attach" them to your EPG constructs?

Beginner

E1/4 is up

E1/5 through E1/15 are down.

Attached are the properties of E1/4 and E1/5. They look identical to me except for the channeling state, which to me indicates the host NIC is not configured for LACP Active as it should be. Perhaps the server group needs to check their NIC configuration. All are ESXi hosts.

Enthusiast

Do you have anything that shows (or can you describe) how you intend for each ESXi host to connect to the fabric? Is each host connected with a single 10-Gig uplink to node 101, or do some have redundant uplinks?

You have configured 16 interfaces for 12 hosts, so maybe it's a combination?

View solution in original post

Beginner

Each ESXi host will connect to the fabric via a 10 Gb port channel. Each host connects to one port on leaf 101 and one port on leaf 102. The extra four ports I configured for future use; those hosts are not yet installed but will be. Only hosts 1 through 12 are currently racked and ready for use.

Hope this clears things up.
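The intended cabling above can be sketched as a simple host-to-port map: 16 ports (eth1/4 through eth1/19) on each of the two leaves, one per host, with hosts 13-16 reserved for the future. Host and port names below are hypothetical, just to make the plan concrete:

```python
# A sketch of the connectivity plan described above; naming is hypothetical.
def build_vpc_plan(first_port: int = 4, n_hosts: int = 16) -> dict:
    """Map each host to its (leaf-101 port, leaf-102 port) vPC member pair."""
    plan = {}
    for i in range(n_hosts):
        port = f"eth1/{first_port + i}"
        # one 10G uplink to each leaf, bundled into a single per-host vPC
        plan[f"esxi-{i + 1:02d}"] = ("leaf-101/" + port, "leaf-102/" + port)
    return plan

plan = build_vpc_plan()
print(plan["esxi-01"])  # -> ('leaf-101/eth1/4', 'leaf-102/eth1/4')
```

Each pair in this map should end up as its own vPC in the fabric.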

Enthusiast

Hi dohogue,

So the next step is to look at your access policy constructs. Let's skip pools, physical domains, and AEPs for now. If you can describe the interface policies (particularly the port channel one), the interface policy group, the interface profile, and your switch policies, that would help confirm that the constructs are in line with your host connectivity plan. Also, I just wanted to confirm: have you defined a vPC domain for leafs 101 and 102?

Beginner

I am currently reviewing the objects you mentioned above; however, something strange happened yesterday that may be a clue. As mentioned at the start of this discussion: "..Port 4 seems to be good but the others have a fault on interfaces 5 thru 19. Interface 4 has a Health Score of 100. The rest of the interfaces have a Health Score of 0."

Yesterday the ESXi host on port 4 had to be rebooted. When that was done the ESXi host on port 5 became good with a Health Score of 100 while the host on port 4 and the others had a Health Score of 0.

I do have a VPC Domain for leafs 101 and 102.

Cisco Employee

Dohogue,

As Claudia requested before, how are the access policies set for the interfaces in question? Are the hosts in question vPC'd to two leaves in the ACI fabric? Are they 12 individual standalone hosts, each expecting to have their own vPC to two leaf nodes?

From two of your attached images, I can see that  both eth 1/4 and 1/5 appear to be using the same Port-channel ID (Po7). Is this expected given your desired topology?

If not, how many vPC policy groups are configured across the 12 hosts currently cabled into the fabric? Each vPC policy group is essentially how we assign the vPC ID, so if we expect each host to have its own distinct vPC, we need to ensure that each host has its own distinct vPC policy group. If all 12 hosts are currently using the same vPC policy group, they are being thrown into one massive vPC, which may explain why the remaining interfaces have gone suspended.

-Gabriel
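Gabriel's diagnosis can be sketched as a small sanity check: group hosts by the vPC interface policy group their interfaces reference, and flag any group shared by more than one host. The host and policy-group names below are hypothetical:

```python
from collections import defaultdict

# Sketch of the check: a vPC policy group shared by multiple hosts collapses
# them into one port-channel (e.g. the shared Po7 seen on eth1/4 and eth1/5),
# and LACP suspends the extra member links.
def hosts_per_policy_group(assignments: dict) -> dict:
    """Group hosts by the vPC policy group their interfaces use."""
    groups = defaultdict(list)
    for host, pol_group in assignments.items():
        groups[pol_group].append(host)
    return dict(groups)

def shared_groups(assignments: dict) -> list:
    """Policy groups claimed by more than one host -- each is a misconfig."""
    return [g for g, hosts in hosts_per_policy_group(assignments).items()
            if len(hosts) > 1]

# All hosts pointing at one group reproduces the observed symptom:
bad = {f"esxi-{i:02d}": "vPC_PolGrp_ALL" for i in range(1, 13)}
good = {f"esxi-{i:02d}": f"vPC_PolGrp_esxi-{i:02d}" for i in range(1, 13)}
print(shared_groups(bad))   # -> ['vPC_PolGrp_ALL']
print(shared_groups(good))  # -> []
```

With a one-group-per-host layout, `shared_groups` comes back empty and each host gets its own vPC ID.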

View solution in original post

Beginner

Gmonroy,

For now we have no access policies in place. Yes, each host is vPC'd to each leaf in the ACI fabric. They are 12 individual standalone hosts, and each should have its own vPC to the leaf nodes.

Yes, all hosts are using the same port-channel ID. I thought they could all share the same one. Do they each need their own?

I only have one vPC policy group configured. It sounds like I may need a vPC policy group for each host, so that each host is treated as unique.

Am I correct with this?

Cisco Employee

Yes, that is correct

A unique vPC interface is created by having unique interface policy groups. The interface policy group is what defines the bundle.

If you have 12 individual hosts, you should have 12 individual interface policy groups. Name them something relevant to the server to facilitate troubleshooting later.

You can verify with "show vpc ex" from the leaf CLI.

Hope that helps!

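As a sketch of what "one policy group per host" looks like in the APIC object model: vPC interface policy groups are `infraAccBndlGrp` objects with `lagT="node"`, each referencing an LACP policy. The naming convention and the LACP policy name below are assumptions for illustration, not taken from this fabric:

```python
# Sketch: generate one vPC interface policy group payload per host for the
# APIC REST API. infraAccBndlGrp with lagT="node" is the vPC policy-group
# class; the group names and LACP policy name are assumptions.
def vpc_policy_group_xml(name: str, lacp_policy: str = "LACP-Active") -> str:
    """Build the XML body for a single per-host vPC interface policy group."""
    return (
        f'<infraAccBndlGrp name="{name}" lagT="node">'
        f'<infraRsLacpPol tnLacpLagPolName="{lacp_policy}"/>'
        f'</infraAccBndlGrp>'
    )

# One distinct policy group per racked host (12 today), per the advice above.
payloads = [vpc_policy_group_xml(f"vPC_PolGrp_esxi-{i:02d}")
            for i in range(1, 13)]
print(len(payloads))  # -> 12
```

Each payload would be POSTed under the fabric access policies (e.g. beneath `uni/infra/funcprof`), and each host's interface selector would then reference its own group.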