Fabric Interconnect inventory is not complete

brandon.keep
Level 1

Hi,

I have recently started seeing this message within our UCS environment.

XXXXXX00-A# show fault
Severity  Code     Last Transition Time     ID       Description
--------- -------- ------------------------ -------- -----------
Major     F0885    2015-11-01T20:48:03.980   2489810 Fabric Interconnect B inventory is not complete card-inventory,eth-pc-inventory,eth-port-inventory,fc-pc-inventory,fc-port-inventory,mgmt-port-inventory,remote-eth-port-inventory,switch-fru,switch-inventory

I was wondering if anyone has any ideas on how to clear this error, or where to start looking for clues as to its origin. Searching Google has not yielded anything fruitful. I do not see any operational issues that I can tell; everything seems to be working fine on both FIs. Is there maybe a way to force another inventory check or something like it?

Thanks for any information.

Regards,

Brandon

1 Accepted Solution


Niko Nikas
Cisco Employee

Brandon,

This looks similar to CSCuw36128.

https://tools.cisco.com/bugsearch/bug/CSCuw36128/?reffering_site=dumpcr


17 Replies


Thank you for the reply. This does look like the issue, so I have a reboot scheduled.

Hi Brandon,

Quick check: how did you fix the issue for this bug? Based on the workaround, it mentions we will need to restart the UCSM processes to recover statsAG MTS buffer space.

How do we restart this process? I would appreciate it if you could provide the steps.

Ajay,

You can restart the FI, or you can run "pmon stop" and "pmon start" from the local-mgmt CLI.

This should be done in a maintenance window. Let me know if you have more questions.

--Wes
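
A minimal session sketch of that pmon restart, run over an SSH connection to the FI's own management IP (the XXXXXX00-A hostname is just carried over from the fault output above as a placeholder; substitute your own FI and do this in a maintenance window):

XXXXXX00-A# connect local-mgmt
XXXXXX00-A(local-mgmt)# pmon stop
XXXXXX00-A(local-mgmt)# pmon start
XXXXXX00-A(local-mgmt)# show pmon state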

Thanks for the prompt response, Wesley. I will perform the recommended action and will let you know the outcome.

Hi Guys,

I have run "show pmon state" on both FIs, but I don't find any errors.

There were no errors found in the "show fault" output.

I have done a cross check in UCSM and there are no alerts there either.

However, SMTP is sending alerts stating that both Fabric Interconnects (A and B) have "inventory is not complete card-inventory,switch-fru" with error code F0885.

Not too sure if stopping and starting pmon will be worthwhile.

Hi Qiese,

Quick check: you mention in option 2:

"4) This should drop your SSH session. Reconnect and connect back to local-mgmt. Do a show pmon state to verify everything is running."

Let's say I have stopped pmon on FI-B; how long will it take to reconnect?
I am assuming I should be able to reconnect to it remotely; correct me if I am wrong.

Hey Ajay,

If you look at the first line in the defect you will notice:

*No statsAG core/restart*

So the pmon state outputs may be clean; however, to ultimately clear the fault, you will need to restart pmon.

If you SSH directly to the Fabric Interconnect's management IP, you can stop and start pmon without having to disconnect the SSH session.

Let me know what questions you have.

--Wes

Hi Wesley,

Let me try the recommended step. Will keep you posted.

Cheers

As Wes Austin stated, that is the correct behavior. Keep us updated.

Hi Wesley,

The issue is resolved now after stopping and starting the pmon service.

Thanks for your advice.

Regards,

Anand

For bug CSCuw36128, there is currently no target date for a fix.

To clear the errors you can try the following steps in a maintenance window.

1) Reboot the secondary Fabric Interconnect.
2) Verify all paths are up and tested on the hosts.
3) SSH to the VIP of the cluster.
4) connect local-mgmt
5) "cluster lead a or b", whichever is the secondary now. This will cause your session to drop. Reconnect after about 30 seconds.
6) SSH to the VIP, connect local-mgmt, run show cluster state, and make sure HA is ready (example session after this list).
7) Reboot the other FI.
8) Give it enough time, then verify the status of both FIs.
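
An example of the failover and verification portion (steps 3 through 6), assuming FI-B is the current secondary; the XXXXXX00 hostname is only a placeholder carried over from the fault output:

XXXXXX00-A# connect local-mgmt
XXXXXX00-A(local-mgmt)# cluster lead b
(session drops; reconnect to the VIP after about 30 seconds)
XXXXXX00-B# connect local-mgmt
XXXXXX00-B(local-mgmt)# show cluster state
(confirm HA is reported as ready before rebooting the other FI)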

You can also do the following if you do not want to reboot the FIs (a sample session follows the list).

1) SSH to FI-A and open another SSH session to FI-B.
2) connect local-mgmt A on the A side and connect local-mgmt B on the B side.
3) pmon stop / pmon start on the secondary side.
4) This should drop your SSH session. Reconnect and connect back to local-mgmt. Do a show pmon state to verify everything is running.
5) cluster lead a or b, whichever is the secondary, from the primary connection.
6) pmon stop / pmon start on the old primary.
7) This should drop your SSH session. Reconnect and connect back to local-mgmt. Do a show pmon state to verify everything is running.
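
A sketch of that no-reboot sequence end to end, assuming FI-A is currently the primary and FI-B the subordinate (hostnames are placeholders from the fault output; if the session drops after pmon stop, simply reconnect and continue):

From the SSH session to FI-B:

XXXXXX00-B# connect local-mgmt
XXXXXX00-B(local-mgmt)# pmon stop
XXXXXX00-B(local-mgmt)# pmon start
XXXXXX00-B(local-mgmt)# show pmon state

From the SSH session to FI-A (the primary), fail the cluster over, then repeat the pmon restart there:

XXXXXX00-A# connect local-mgmt
XXXXXX00-A(local-mgmt)# cluster lead b
XXXXXX00-A(local-mgmt)# pmon stop
XXXXXX00-A(local-mgmt)# pmon start
XXXXXX00-A(local-mgmt)# show pmon state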

Thanks for the prompt response, Qiese. I will perform the recommended action and will let you know the outcome.

I was going to follow these instructions (pmon stop/start) this morning on my 2.2(3b) setup, which has this exact issue, and I get a password prompt when issuing the pmon commands. I have never seen that before and can't find ANY documentation on it. My admin account has full rights, and I even tried the actual 'admin' login; no change.

Any pointers? I am sitting at the local-mgmt # prompt. Thanks!

Lee,

Would it be possible to attach a screenshot here of the prompt? 

Or perhaps just paste the output?

I haven't seen any prompts when logging in as the default/local 'admin' user.

Are you trying to kill the processes on the subordinate?

--

Niko
