06-11-2017 11:44 PM - edited 03-01-2019 05:15 AM
I have an issue where APIC 1 and APIC 2 will not register, and I cannot access the APIC from the GUI.
I can only access the APIC from the CLI over SSH with the user rescue-user.
What is the problem? Any clue how to fix it?
Thanks
06-12-2017 10:18 AM
Hello Achmadfarisy,
There are a few points that need clarification given your messages above. To start, it may be easier to open an SR to track and understand the behavior in question.
1. What version are both APICs running?
2. What do you see in the console output of either APIC when logging in as 'admin'?
3. Have we attempted reloading the APICs to see if this alleviates any of the above conditions?
4. Once provisioned, APIC 1 should be accessible via admin credentials even without other nodes in the fabric. APIC 2 will only become accessible once it has joined the fabric successfully.
-Gabriel
06-12-2017 08:18 PM
1. What version are both APICs running?
2.2(1n)
2. What do you see in the console output of either APIC when logging in as 'admin'?
I can only access the CLI remotely over SSH with the user rescue-user and the existing admin password.
3. Have we attempted reloading the APICs to see if this alleviates any of the above conditions?
Yes, I have reloaded the APICs, but the problem did not change.
Maybe these logs will help:
APIC1 =
JKTTBSAPC01# tail -n 25 /var/log/dme/log/svc_ifc_appliancedirector.bin.warnplus.log
5019||17-06-12 21:32:30.792+07:00||exception_handling||ERROR||co=doer:255:127:0xff0000000000d2e7:1||ERROR[3|0] ../common/src/framework/./core/meta/Task.cc(1183):asyncFailed: (Dn0) : STAGE ifc:RelnRelTaskContRelnResolve : END-POINT UNAVAILABLE Dn0=svccont-6-18/rcont-relnresolve-18374686479671623682, ||../common/src/framework/./core/error/Report.cc||136
5019||17-06-12 21:32:30.793+07:00||exception_handling||ERROR||co=doer:255:127:0xff0000000000d2e8:1||ERROR[3|0] ../common/src/framework/./core/meta/Task.cc(1183):asyncFailed: (Dn0) : STAGE ifc:RelnRelTaskContRelnResolve : END-POINT UNAVAILABLE Dn0=svccont-6-22/rcont-relnresolve-18374686479671623682, ||../common/src/framework/./core/error/Report.cc||136
5019||17-06-12 21:32:30.793+07:00||exception_handling||ERROR||co=doer:255:127:0xff0000000000d2e9:1||ERROR[3|0] ../common/src/framework/./core/meta/Task.cc(1183):asyncFailed: (Dn0) : STAGE ifc:RelnRelTaskContRelnResolve : END-POINT UNAVAILABLE Dn0=svccont-6-17/rcont-relnresolve-18374686479671623682, ||../common/src/framework/./core/error/Report.cc||136
5019||17-06-12 21:32:30.793+07:00||exception_handling||ERROR||co=doer:255:127:0xff0000000000d2ea:1||ERROR[3|0] ../common/src/framework/./core/meta/Task.cc(1183):asyncFailed: (Dn0) : STAGE ifc:RelnRelTaskContResolve : END-POINT UNAVAILABLE Dn0=svccont-6-23/rcont-polresolve-18374686479671623682, ||../common/src/framework/./core/error/Report.cc||136
5019||17-06-12 21:32:30.793+07:00||exception_handling||ERROR||co=doer:255:127:0xff0000000000d2eb:1||ERROR[3|0] ../common/src/framework/./core/meta/Task.cc(1183):asyncFailed: (Dn0) : STAGE ifc:RelnRelTaskContRelnResolve : END-POINT UNAVAILABLE Dn0=svccont-6-19/rcont-relnresolve-18374686479671623682, ||../common/src/framework/./core/error/Report.cc||136
5019||17-06-12 21:32:30.794+07:00||exception_handling||ERROR||co=doer:255:127:0xff0000000000d2ec:1||ERROR[3|0] ../common/src/framework/./core/meta/Task.cc(1183):asyncFailed: (Dn0) : STAGE ifc:RelnRelTaskContRelnResolve : END-POINT UNAVAILABLE Dn0=svccont-6-16/rcont-relnresolve-18374686479671623682, ||../common/src/framework/./core/error/Report.cc||136
5019||17-06-12 21:32:30.794+07:00||exception_handling||ERROR||co=doer:255:127:0xff0000000000d2ed:1||ERROR[3|0] ../common/src/framework/./core/meta/Task.cc(1183):asyncFailed: (Dn0) : STAGE ifc:RelnRelTaskContRelnResolve : END-POINT UNAVAILABLE Dn0=svccont-6-11/rcont-relnresolve-18374686479671623682, ||../common/src/framework/./core/error/Report.cc||136
5019||17-06-12 21:32:30.794+07:00||exception_handling||ERROR||co=doer:255:127:0xff0000000000d2ee:1||ERROR[3|0] ../common/src/framework/./core/meta/Task.cc(1183):asyncFailed: (Dn0) : STAGE ifc:RelnRelTaskContRelnResolve : END-POINT UNAVAILABLE Dn0=svccont-6-30/rcont-relnresolve-18374686479671623682, ||../common/src/framework/./core/error/Report.cc||136
5019||17-06-12 21:32:30.794+07:00||exception_handling||ERROR||co=doer:255:127:0xff0000000000d2ef:1||ERROR[3|0] ../common/src/framework/./core/meta/Task.cc(1183):asyncFailed: (Dn0) : STAGE ifc:RelnRelTaskContRelnResolve : END-POINT UNAVAILABLE Dn0=svccont-6-15/rcont-relnresolve-18374686479671623682, ||../common/src/framework/./core/error/Report.cc||136
5019||17-06-12 21:32:30.795+07:00||exception_handling||ERROR||co=doer:255:127:0xff0000000000d2f0:1||ERROR[3|0] ../common/src/framework/./core/meta/Task.cc(1183):asyncFailed: (Dn0) : STAGE ifc:RelnRelTaskContRelnResolve : END-POINT UNAVAILABLE Dn0=svccont-6-10/rcont-relnresolve-18374686479671623682, ||../common/src/framework/./core/error/Report.cc||136
5019||17-06-12 21:32:30.795+07:00||exception_handling||ERROR||co=doer:255:127:0xff0000000000d2f1:1||ERROR[3|0] ../common/src/framework/./core/meta/Task.cc(1183):asyncFailed: (Dn0) : STAGE ifc:RelnRelTaskContRelnResolve : END-POINT UNAVAILABLE Dn0=svccont-6-29/rcont-relnresolve-18374686479671623682, ||../common/src/framework/./core/error/Report.cc||136
5019||17-06-12 21:32:30.795+07:00||exception_handling||ERROR||co=doer:255:127:0xff0000000000d2f2:1||ERROR[3|0] ../common/src/framework/./core/meta/Task.cc(1183):asyncFailed: (Dn0) : STAGE ifc:RelnRelTaskContRelnResolve : END-POINT UNAVAILABLE Dn0=svccont-6-14/rcont-relnresolve-18374686479671623682, ||../common/src/framework/./core/error/Report.cc||136
5019||17-06-12 21:32:30.795+07:00||exception_handling||ERROR||co=doer:255:127:0xff0000000000d2f3:1||ERROR[3|0] ../common/src/framework/./core/meta/Task.cc(1183):asyncFailed: (Dn0) : STAGE ifc:RelnRelTaskContRelnResolve : END-POINT UNAVAILABLE Dn0=svccont-6-9/rcont-relnresolve-18374686479671623682, ||../common/src/framework/./core/error/Report.cc||136
5019||17-06-12 21:32:30.796+07:00||exception_handling||ERROR||co=doer:255:127:0xff0000000000d2f4:1||ERROR[3|0] ../common/src/framework/./core/meta/Task.cc(1183):asyncFailed: (Dn0) : STAGE ifc:RelnRelTaskContRelnResolve : END-POINT UNAVAILABLE Dn0=svccont-6-28/rcont-relnresolve-18374686479671623682, ||../common/src/framework/./core/error/Report.cc||136
5019||17-06-12 21:32:30.796+07:00||exception_handling||ERROR||co=doer:255:127:0xff0000000000d2f5:1||ERROR[3|0] ../common/src/framework/./core/meta/Task.cc(1183):asyncFailed: (Dn0) : STAGE ifc:RelnRelTaskContRelnResolve : END-POINT UNAVAILABLE Dn0=svccont-6-32/rcont-relnresolve-18374686479671623682, ||../common/src/framework/./core/error/Report.cc||136
5019||17-06-12 21:32:30.796+07:00||exception_handling||ERROR||co=doer:255:127:0xff0000000000d2f6:1||ERROR[3|0] ../common/src/framework/./core/meta/Task.cc(1183):asyncFailed: (Dn0) : STAGE ifc:RelnRelTaskContRelnResolve : END-POINT UNAVAILABLE Dn0=svccont-6-13/rcont-relnresolve-18374686479671623682, ||../common/src/framework/./core/error/Report.cc||136
5019||17-06-12 21:32:30.796+07:00||exception_handling||ERROR||co=doer:255:127:0xff0000000000d2f7:1||ERROR[3|0] ../common/src/framework/./core/meta/Task.cc(1183):asyncFailed: (Dn0) : STAGE ifc:RelnRelTaskContRelnResolve : END-POINT UNAVAILABLE Dn0=svccont-6-31/rcont-relnresolve-18374686479671623682, ||../common/src/framework/./core/error/Report.cc||136
5019||17-06-12 21:32:30.797+07:00||exception_handling||ERROR||co=doer:255:127:0xff0000000000d2f8:1||ERROR[3|0] ../common/src/framework/./core/meta/Task.cc(1183):asyncFailed: (Dn0) : STAGE ifc:RelnRelTaskContRelnResolve : END-POINT UNAVAILABLE Dn0=svccont-6-12/rcont-relnresolve-18374686479671623682, ||../common/src/framework/./core/error/Report.cc||136
5019||17-06-12 21:32:30.797+07:00||exception_handling||ERROR||co=doer:255:127:0xff0000000000d2f9:1||ERROR[3|0] ../common/src/framework/./core/meta/Task.cc(1183):asyncFailed: (Dn0) : STAGE ifc:RelnRelTaskContRelnResolve : END-POINT UNAVAILABLE Dn0=svccont-9-32/rcont-relnresolve-18374686479671623682, ||../common/src/framework/./core/error/Report.cc||136
5019||17-06-12 21:32:30.797+07:00||exception_handling||ERROR||co=doer:255:127:0xff0000000000d2fa:1||ERROR[3|0] ../common/src/framework/./core/meta/Task.cc(1183):asyncFailed: (Dn0) : STAGE ifc:RelnRelTaskContRelnResolve : END-POINT UNAVAILABLE Dn0=svccont-9-31/rcont-relnresolve-18374686479671623682, ||../common/src/framework/./core/error/Report.cc||136
5019||17-06-12 21:32:33.292+07:00||manager||WARN||co=repl:255:127:0xff0000000000d2fc:6||ShardId : 0 ReplicaId: 0 not found on the appliance. Dropping the stimulus||../common/src/framework/./core/shard/Manager.cc||574 bico 30.797
5019||17-06-12 21:32:38.545+07:00||manager||WARN||co=repl:255:127:0xff0000000000d305:6||ShardId : 0 ReplicaId: 0 not found on the appliance. Dropping the stimulus||../common/src/framework/./core/shard/Manager.cc||574 bico 33.292
5117||17-06-12 21:32:42.783+07:00||ifm||WARN||to=ifc_policymgr:2:0:6:0,co=ifm||message delivery returned a fatal error (envelope 0x500000000339a, remote=129 [no replica is available])||../common/src/framework/./core/proc/StimulusMessagingInterface.cc||982 bico 38.545
5019||17-06-12 21:32:42.783+07:00||exception_handling||ERROR||co=doer:255:127:0xff0000000000d309:1||ERROR[3|0] ../common/src/framework/./core/meta/Task.cc(1183):asyncFailed: (Dn0) : STAGE ifc:appliancedirector:PkiFabricNodeSSLCertificatePushIFMSSLCertsFromAD : END-POINT UNAVAILABLE Dn0=uni/fabsslcomm/ifmcertnode-1, ||../common/src/framework/./core/error/Report.cc||136
5021||17-06-12 21:32:43.793+07:00||manager||WARN||co=repl:255:127:0xff0000000000d30a:6||ShardId : 0 ReplicaId: 0 not found on the appliance. Dropping the stimulus||../common/src/framework/./core/shard/Manager.cc||574 bico 42.783
JKTTBSAPC01#
APIC2 =
JKTTBSAPC02# tail -n 25 /var/log/dme/log/svc_ifc_appliancedirector.bin.warnplus.log
4971||17-06-12 21:32:20.391+07:00||exception_handling||ERROR||co=doer:255:127:0xff0000000000a4ee:1||ERROR[3|0] ../common/src/framework/./core/meta/Task.cc(1183):asyncFailed: (Dn0) : STAGE ifc:RelnRelTaskContRelnResolve : END-POINT UNAVAILABLE Dn0=svccont-6-4/rcont-relnresolve-18374686479671623682, ||../common/src/framework/./core/error/Report.cc||136
4971||17-06-12 21:32:20.391+07:00||exception_handling||ERROR||co=doer:255:127:0xff0000000000a4ef:1||ERROR[3|0] ../common/src/framework/./core/meta/Task.cc(1183):asyncFailed: (Dn0) : STAGE ifc:RelnRelTaskContRelnResolve : END-POINT UNAVAILABLE Dn0=svccont-6-3/rcont-relnresolve-18374686479671623682, ||../common/src/framework/./core/error/Report.cc||136
4971||17-06-12 21:32:20.392+07:00||exception_handling||ERROR||co=doer:255:127:0xff0000000000a4f0:1||ERROR[3|0] ../common/src/framework/./core/meta/Task.cc(1183):asyncFailed: (Dn0) : STAGE ifc:RelnRelTaskContRelnResolve : END-POINT UNAVAILABLE Dn0=svccont-6-2/rcont-relnresolve-18374686479671623682, ||../common/src/framework/./core/error/Report.cc||136
4971||17-06-12 21:32:20.392+07:00||exception_handling||ERROR||co=doer:255:127:0xff0000000000a4f1:1||ERROR[3|0] ../common/src/framework/./core/meta/Task.cc(1183):asyncFailed: (Dn0) : STAGE ifc:RelnRelTaskContResolve : END-POINT UNAVAILABLE Dn0=svccont-6-23/rcont-polresolve-18374686479671623682, ||../common/src/framework/./core/error/Report.cc||136
4971||17-06-12 21:32:20.392+07:00||exception_handling||ERROR||co=doer:255:127:0xff0000000000a4f2:1||ERROR[3|0] ../common/src/framework/./core/meta/Task.cc(1183):asyncFailed: (Dn0) : STAGE ifc:RelnRelTaskContRelnResolve : END-POINT UNAVAILABLE Dn0=svccont-6-29/rcont-relnresolve-18374686479671623682, ||../common/src/framework/./core/error/Report.cc||136
4971||17-06-12 21:32:20.392+07:00||exception_handling||ERROR||co=doer:255:127:0xff0000000000a4f3:1||ERROR[3|0] ../common/src/framework/./core/meta/Task.cc(1183):asyncFailed: (Dn0) : STAGE ifc:RelnRelTaskContRelnResolve : END-POINT UNAVAILABLE Dn0=svccont-6-10/rcont-relnresolve-18374686479671623682, ||../common/src/framework/./core/error/Report.cc||136
4971||17-06-12 21:32:20.393+07:00||exception_handling||ERROR||co=doer:255:127:0xff0000000000a4f4:1||ERROR[3|0] ../common/src/framework/./core/meta/Task.cc(1183):asyncFailed: (Dn0) : STAGE ifc:RelnRelTaskContRelnResolve : END-POINT UNAVAILABLE Dn0=svccont-6-30/rcont-relnresolve-18374686479671623682, ||../common/src/framework/./core/error/Report.cc||136
4971||17-06-12 21:32:20.393+07:00||exception_handling||ERROR||co=doer:255:127:0xff0000000000a4f5:1||ERROR[3|0] ../common/src/framework/./core/meta/Task.cc(1183):asyncFailed: (Dn0) : STAGE ifc:RelnRelTaskContRelnResolve : END-POINT UNAVAILABLE Dn0=svccont-6-11/rcont-relnresolve-18374686479671623682, ||../common/src/framework/./core/error/Report.cc||136
4971||17-06-12 21:32:20.393+07:00||exception_handling||ERROR||co=doer:255:127:0xff0000000000a4f6:1||ERROR[3|0] ../common/src/framework/./core/meta/Task.cc(1183):asyncFailed: (Dn0) : STAGE ifc:RelnRelTaskContRelnResolve : END-POINT UNAVAILABLE Dn0=svccont-6-31/rcont-relnresolve-18374686479671623682, ||../common/src/framework/./core/error/Report.cc||136
4971||17-06-12 21:32:20.393+07:00||exception_handling||ERROR||co=doer:255:127:0xff0000000000a4f7:1||ERROR[3|0] ../common/src/framework/./core/meta/Task.cc(1183):asyncFailed: (Dn0) : STAGE ifc:RelnRelTaskContRelnResolve : END-POINT UNAVAILABLE Dn0=svccont-6-12/rcont-relnresolve-18374686479671623682, ||../common/src/framework/./core/error/Report.cc||136
4971||17-06-12 21:32:20.394+07:00||exception_handling||ERROR||co=doer:255:127:0xff0000000000a4f8:1||ERROR[3|0] ../common/src/framework/./core/meta/Task.cc(1183):asyncFailed: (Dn0) : STAGE ifc:RelnRelTaskContRelnResolve : END-POINT UNAVAILABLE Dn0=svccont-6-32/rcont-relnresolve-18374686479671623682, ||../common/src/framework/./core/error/Report.cc||136
4971||17-06-12 21:32:20.394+07:00||exception_handling||ERROR||co=doer:255:127:0xff0000000000a4f9:1||ERROR[3|0] ../common/src/framework/./core/meta/Task.cc(1183):asyncFailed: (Dn0) : STAGE ifc:RelnRelTaskContRelnResolve : END-POINT UNAVAILABLE Dn0=svccont-6-13/rcont-relnresolve-18374686479671623682, ||../common/src/framework/./core/error/Report.cc||136
4971||17-06-12 21:32:20.394+07:00||exception_handling||ERROR||co=doer:255:127:0xff0000000000a4fa:1||ERROR[3|0] ../common/src/framework/./core/meta/Task.cc(1183):asyncFailed: (Dn0) : STAGE ifc:RelnRelTaskContRelnResolve : END-POINT UNAVAILABLE Dn0=svccont-6-14/rcont-relnresolve-18374686479671623682, ||../common/src/framework/./core/error/Report.cc||136
4971||17-06-12 21:32:20.394+07:00||exception_handling||ERROR||co=doer:255:127:0xff0000000000a4fb:1||ERROR[3|0] ../common/src/framework/./core/meta/Task.cc(1183):asyncFailed: (Dn0) : STAGE ifc:RelnRelTaskContRelnResolve : END-POINT UNAVAILABLE Dn0=svccont-6-15/rcont-relnresolve-18374686479671623682, ||../common/src/framework/./core/error/Report.cc||136
4971||17-06-12 21:32:20.395+07:00||exception_handling||ERROR||co=doer:255:127:0xff0000000000a4fc:1||ERROR[3|0] ../common/src/framework/./core/meta/Task.cc(1183):asyncFailed: (Dn0) : STAGE ifc:RelnRelTaskContRelnResolve : END-POINT UNAVAILABLE Dn0=svccont-6-16/rcont-relnresolve-18374686479671623682, ||../common/src/framework/./core/error/Report.cc||136
4971||17-06-12 21:32:20.395+07:00||exception_handling||ERROR||co=doer:255:127:0xff0000000000a4fd:1||ERROR[3|0] ../common/src/framework/./core/meta/Task.cc(1183):asyncFailed: (Dn0) : STAGE ifc:RelnRelTaskContRelnResolve : END-POINT UNAVAILABLE Dn0=svccont-6-17/rcont-relnresolve-18374686479671623682, ||../common/src/framework/./core/error/Report.cc||136
4971||17-06-12 21:32:20.395+07:00||exception_handling||ERROR||co=doer:255:127:0xff0000000000a4fe:1||ERROR[3|0] ../common/src/framework/./core/meta/Task.cc(1183):asyncFailed: (Dn0) : STAGE ifc:RelnRelTaskContRelnResolve : END-POINT UNAVAILABLE Dn0=svccont-6-18/rcont-relnresolve-18374686479671623682, ||../common/src/framework/./core/error/Report.cc||136
4971||17-06-12 21:32:20.395+07:00||exception_handling||ERROR||co=doer:255:127:0xff0000000000a4ff:1||ERROR[3|0] ../common/src/framework/./core/meta/Task.cc(1183):asyncFailed: (Dn0) : STAGE ifc:RelnRelTaskContRelnResolve : END-POINT UNAVAILABLE Dn0=svccont-6-19/rcont-relnresolve-18374686479671623682, ||../common/src/framework/./core/error/Report.cc||136
4971||17-06-12 21:32:20.396+07:00||exception_handling||ERROR||co=doer:255:127:0xff0000000000a500:1||ERROR[3|0] ../common/src/framework/./core/meta/Task.cc(1183):asyncFailed: (Dn0) : STAGE ifc:RelnRelTaskContRelnResolve : END-POINT UNAVAILABLE Dn0=svccont-6-21/rcont-relnresolve-18374686479671623682, ||../common/src/framework/./core/error/Report.cc||136
4971||17-06-12 21:32:20.396+07:00||exception_handling||ERROR||co=doer:255:127:0xff0000000000a501:1||ERROR[3|0] ../common/src/framework/./core/meta/Task.cc(1183):asyncFailed: (Dn0) : STAGE ifc:RelnRelTaskContRelnResolve : END-POINT UNAVAILABLE Dn0=svccont-9-32/rcont-relnresolve-18374686479671623682, ||../common/src/framework/./core/error/Report.cc||136
4971||17-06-12 21:32:20.396+07:00||exception_handling||ERROR||co=doer:255:127:0xff0000000000a502:1||ERROR[3|0] ../common/src/framework/./core/meta/Task.cc(1183):asyncFailed: (Dn0) : STAGE ifc:RelnRelTaskContRelnResolve : END-POINT UNAVAILABLE Dn0=svccont-9-30/rcont-relnresolve-18374686479671623682, ||../common/src/framework/./core/error/Report.cc||136
5023||17-06-12 21:32:38.388+07:00||ifm||WARN||to=ifc_policymgr:2:0:6:0,co=ifm||message delivery returned a fatal error (envelope 0x5000000003c57, remote=129 [no replica is available])||../common/src/framework/./core/proc/StimulusMessagingInterface.cc||982 bico 20.396
4972||17-06-12 21:32:38.389+07:00||exception_handling||ERROR||co=doer:255:127:0xff0000000000a514:1||ERROR[3|0] ../common/src/framework/./core/meta/Task.cc(1183):asyncFailed: (Dn0) : STAGE ifc:appliancedirector:PkiFabricNodeSSLCertificatePushIFMSSLCertsFromAD : END-POINT UNAVAILABLE Dn0=uni/fabsslcomm/ifmcertnode-2, ||../common/src/framework/./core/error/Report.cc||136
5023||17-06-12 21:33:59.405+07:00||ifm||WARN||to=ifc_policymgr:2:0:6:0,co=ifm||message delivery returned a fatal error (envelope 0x5000000003c76, remote=129 [no replica is available])||../common/src/framework/./core/proc/StimulusMessagingInterface.cc||982 bico 38.389
4969||17-06-12 21:33:59.405+07:00||exception_handling||ERROR||co=doer:255:127:0xff0000000000a555:1||ERROR[3|0] ../common/src/framework/./core/meta/Task.cc(1183):asyncFailed: (Dn0) : STAGE ifc:appliancedirector:PkiFabricNodeSSLCertificatePushIFMSSLCertsFromAD : END-POINT UNAVAILABLE Dn0=uni/fabsslcomm/ifmcertnode-2, ||../common/src/framework/./core/error/Report.cc||136
JKTTBSAPC02#
APIC3 =
JKTTBSAPC03# tail -n 25 /var/log/dme/log/svc_ifc_appliancedirector.bin.warnplus.log
23888||17-06-12 21:35:21.195+07:00||ifm||WARN||to=ifc_topomgr:2:0:9:0,co=ifm||message delivery returned a fatal error (envelope 0x7000000002bd7, remote=129 [no replica is available])||../common/src/framework/./core/proc/StimulusMessagingInterface.cc||982
23888||17-06-12 21:35:21.195+07:00||ifm||WARN||to=ifc_topomgr:2:0:9:0,co=ifm||message delivery returned a fatal error (envelope 0x7000000002bdb, remote=129 [no replica is available])||../common/src/framework/./core/proc/StimulusMessagingInterface.cc||982
23804||17-06-12 21:35:21.195+07:00||exception_handling||ERROR||co=doer:255:127:0xff0000000000d703:1||ERROR[3|0] ../common/src/framework/./core/meta/Task.cc(1183):asyncFailed: (Dn0) : STAGE ifc:RelnRelTaskContRelnResolve : END-POINT UNAVAILABLE Dn0=svccont-6-24/rcont-relnresolve-18374686479671623682, ||../common/src/framework/./core/error/Report.cc||136
23804||17-06-12 21:35:21.195+07:00||exception_handling||ERROR||co=doer:255:127:0xff0000000000d704:1||ERROR[3|0] ../common/src/framework/./core/meta/Task.cc(1183):asyncFailed: (Dn0) : STAGE ifc:RelnRelTaskContRelnResolve : END-POINT UNAVAILABLE Dn0=svccont-6-5/rcont-relnresolve-18374686479671623682, ||../common/src/framework/./core/error/Report.cc||136
23804||17-06-12 21:35:21.195+07:00||exception_handling||ERROR||co=doer:255:127:0xff0000000000d705:1||ERROR[3|0] ../common/src/framework/./core/meta/Task.cc(1183):asyncFailed: (Dn0) : STAGE ifc:RelnRelTaskContRelnTargetCreate : END-POINT UNAVAILABLE Dn0=svccont-6-24/rcont-targetcreate-18374686479671623725, ||../common/src/framework/./core/error/Report.cc||136
23804||17-06-12 21:35:21.195+07:00||exception_handling||ERROR||co=doer:255:127:0xff0000000000d706:1||ERROR[3|0] ../common/src/framework/./core/meta/Task.cc(1183):asyncFailed: (Dn0) : STAGE ifc:RelnRelTaskContRelnResolve : END-POINT UNAVAILABLE Dn0=svccont-6-18/rcont-relnresolve-18374686479671623682, ||../common/src/framework/./core/error/Report.cc||136
23804||17-06-12 21:35:21.195+07:00||exception_handling||ERROR||co=doer:255:127:0xff0000000000d707:1||ERROR[3|0] ../common/src/framework/./core/meta/Task.cc(1183):asyncFailed: (Dn0) : STAGE ifc:RelnRelTaskContRelnResolve : END-POINT UNAVAILABLE Dn0=svccont-6-22/rcont-relnresolve-18374686479671623682, ||../common/src/framework/./core/error/Report.cc||136
23804||17-06-12 21:35:21.196+07:00||exception_handling||ERROR||co=doer:255:127:0xff0000000000d708:1||ERROR[3|0] ../common/src/framework/./core/meta/Task.cc(1183):asyncFailed: (Dn0) : STAGE ifc:RelnRelTaskContRelnResolve : END-POINT UNAVAILABLE Dn0=svccont-6-17/rcont-relnresolve-18374686479671623682, ||../common/src/framework/./core/error/Report.cc||136
23804||17-06-12 21:35:21.196+07:00||exception_handling||ERROR||co=doer:255:127:0xff0000000000d709:1||ERROR[3|0] ../common/src/framework/./core/meta/Task.cc(1183):asyncFailed: (Dn0) : STAGE ifc:RelnRelTaskContResolve : END-POINT UNAVAILABLE Dn0=svccont-6-23/rcont-polresolve-18374686479671623682, ||../common/src/framework/./core/error/Report.cc||136
23804||17-06-12 21:35:21.196+07:00||exception_handling||ERROR||co=doer:255:127:0xff0000000000d70a:1||ERROR[3|0] ../common/src/framework/./core/meta/Task.cc(1183):asyncFailed: (Dn0) : STAGE ifc:RelnRelTaskContRelnResolve : END-POINT UNAVAILABLE Dn0=svccont-6-19/rcont-relnresolve-18374686479671623682, ||../common/src/framework/./core/error/Report.cc||136
23804||17-06-12 21:35:21.196+07:00||exception_handling||ERROR||co=doer:255:127:0xff0000000000d70b:1||ERROR[3|0] ../common/src/framework/./core/meta/Task.cc(1183):asyncFailed: (Dn0) : STAGE ifc:RelnRelTaskContRelnResolve : END-POINT UNAVAILABLE Dn0=svccont-6-16/rcont-relnresolve-18374686479671623682, ||../common/src/framework/./core/error/Report.cc||136
23804||17-06-12 21:35:21.197+07:00||exception_handling||ERROR||co=doer:255:127:0xff0000000000d70c:1||ERROR[3|0] ../common/src/framework/./core/meta/Task.cc(1183):asyncFailed: (Dn0) : STAGE ifc:RelnRelTaskContRelnResolve : END-POINT UNAVAILABLE Dn0=svccont-6-11/rcont-relnresolve-18374686479671623682, ||../common/src/framework/./core/error/Report.cc||136
23804||17-06-12 21:35:21.197+07:00||exception_handling||ERROR||co=doer:255:127:0xff0000000000d70d:1||ERROR[3|0] ../common/src/framework/./core/meta/Task.cc(1183):asyncFailed: (Dn0) : STAGE ifc:RelnRelTaskContRelnResolve : END-POINT UNAVAILABLE Dn0=svccont-6-30/rcont-relnresolve-18374686479671623682, ||../common/src/framework/./core/error/Report.cc||136
23804||17-06-12 21:35:21.197+07:00||exception_handling||ERROR||co=doer:255:127:0xff0000000000d70e:1||ERROR[3|0] ../common/src/framework/./core/meta/Task.cc(1183):asyncFailed: (Dn0) : STAGE ifc:RelnRelTaskContRelnResolve : END-POINT UNAVAILABLE Dn0=svccont-6-15/rcont-relnresolve-18374686479671623682, ||../common/src/framework/./core/error/Report.cc||136
23804||17-06-12 21:35:21.197+07:00||exception_handling||ERROR||co=doer:255:127:0xff0000000000d70f:1||ERROR[3|0] ../common/src/framework/./core/meta/Task.cc(1183):asyncFailed: (Dn0) : STAGE ifc:RelnRelTaskContRelnResolve : END-POINT UNAVAILABLE Dn0=svccont-6-10/rcont-relnresolve-18374686479671623682, ||../common/src/framework/./core/error/Report.cc||136
23804||17-06-12 21:35:21.197+07:00||exception_handling||ERROR||co=doer:255:127:0xff0000000000d710:1||ERROR[3|0] ../common/src/framework/./core/meta/Task.cc(1183):asyncFailed: (Dn0) : STAGE ifc:RelnRelTaskContRelnResolve : END-POINT UNAVAILABLE Dn0=svccont-6-29/rcont-relnresolve-18374686479671623682, ||../common/src/framework/./core/error/Report.cc||136
23804||17-06-12 21:35:21.198+07:00||exception_handling||ERROR||co=doer:255:127:0xff0000000000d711:1||ERROR[3|0] ../common/src/framework/./core/meta/Task.cc(1183):asyncFailed: (Dn0) : STAGE ifc:RelnRelTaskContRelnResolve : END-POINT UNAVAILABLE Dn0=svccont-6-14/rcont-relnresolve-18374686479671623682, ||../common/src/framework/./core/error/Report.cc||136
23804||17-06-12 21:35:21.198+07:00||exception_handling||ERROR||co=doer:255:127:0xff0000000000d712:1||ERROR[3|0] ../common/src/framework/./core/meta/Task.cc(1183):asyncFailed: (Dn0) : STAGE ifc:RelnRelTaskContRelnResolve : END-POINT UNAVAILABLE Dn0=svccont-6-9/rcont-relnresolve-18374686479671623682, ||../common/src/framework/./core/error/Report.cc||136
23804||17-06-12 21:35:21.198+07:00||exception_handling||ERROR||co=doer:255:127:0xff0000000000d713:1||ERROR[3|0] ../common/src/framework/./core/meta/Task.cc(1183):asyncFailed: (Dn0) : STAGE ifc:RelnRelTaskContRelnResolve : END-POINT UNAVAILABLE Dn0=svccont-6-28/rcont-relnresolve-18374686479671623682, ||../common/src/framework/./core/error/Report.cc||136
23804||17-06-12 21:35:21.198+07:00||exception_handling||ERROR||co=doer:255:127:0xff0000000000d714:1||ERROR[3|0] ../common/src/framework/./core/meta/Task.cc(1183):asyncFailed: (Dn0) : STAGE ifc:RelnRelTaskContRelnResolve : END-POINT UNAVAILABLE Dn0=svccont-6-32/rcont-relnresolve-18374686479671623682, ||../common/src/framework/./core/error/Report.cc||136
23804||17-06-12 21:35:21.198+07:00||exception_handling||ERROR||co=doer:255:127:0xff0000000000d715:1||ERROR[3|0] ../common/src/framework/./core/meta/Task.cc(1183):asyncFailed: (Dn0) : STAGE ifc:RelnRelTaskContRelnResolve : END-POINT UNAVAILABLE Dn0=svccont-6-13/rcont-relnresolve-18374686479671623682, ||../common/src/framework/./core/error/Report.cc||136
23804||17-06-12 21:35:21.199+07:00||exception_handling||ERROR||co=doer:255:127:0xff0000000000d716:1||ERROR[3|0] ../common/src/framework/./core/meta/Task.cc(1183):asyncFailed: (Dn0) : STAGE ifc:RelnRelTaskContRelnResolve : END-POINT UNAVAILABLE Dn0=svccont-6-31/rcont-relnresolve-18374686479671623682, ||../common/src/framework/./core/error/Report.cc||136
23804||17-06-12 21:35:21.199+07:00||exception_handling||ERROR||co=doer:255:127:0xff0000000000d717:1||ERROR[3|0] ../common/src/framework/./core/meta/Task.cc(1183):asyncFailed: (Dn0) : STAGE ifc:RelnRelTaskContRelnResolve : END-POINT UNAVAILABLE Dn0=svccont-6-12/rcont-relnresolve-18374686479671623682, ||../common/src/framework/./core/error/Report.cc||136
23804||17-06-12 21:35:21.199+07:00||exception_handling||ERROR||co=doer:255:127:0xff0000000000d718:1||ERROR[3|0] ../common/src/framework/./core/meta/Task.cc(1183):asyncFailed: (Dn0) : STAGE ifc:RelnRelTaskContRelnResolve : END-POINT UNAVAILABLE Dn0=svccont-9-31/rcont-relnresolve-18374686479671623682, ||../common/src/framework/./core/error/Report.cc||136
23804||17-06-12 21:35:21.199+07:00||exception_handling||ERROR||co=doer:255:127:0xff0000000000d719:1||ERROR[3|0] ../common/src/framework/./core/meta/Task.cc(1183):asyncFailed: (Dn0) : STAGE ifc:RelnRelTaskContRelnResolve : END-POINT UNAVAILABLE Dn0=svccont-9-30/rcont-relnresolve-18374686479671623682, ||../common/src/framework/./core/error/Report.cc||136
JKTTBSAPC03#
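All three controllers are logging the same pattern of END-POINT UNAVAILABLE errors. Rather than eyeballing the dumps, the failing Dn can be tallied from the log lines; a minimal sketch (the sample strings below are trimmed copies of the entries above, not a general parser for this log format; on the APIC the source file would be the `svc_ifc_appliancedirector.bin.warnplus.log` used in the `tail` commands):

```python
import re
from collections import Counter

# Lines in the shape of the warnplus entries pasted above, trimmed to the
# portion that matters for counting.
log_lines = [
    "asyncFailed: (Dn0) : STAGE ifc:RelnRelTaskContRelnResolve : END-POINT UNAVAILABLE Dn0=svccont-6-18/rcont-relnresolve-18374686479671623682,",
    "asyncFailed: (Dn0) : STAGE ifc:RelnRelTaskContRelnResolve : END-POINT UNAVAILABLE Dn0=svccont-6-22/rcont-relnresolve-18374686479671623682,",
    "asyncFailed: (Dn0) : STAGE ifc:RelnRelTaskContRelnResolve : END-POINT UNAVAILABLE Dn0=svccont-6-18/rcont-relnresolve-18374686479671623682,",
]

# Count how often each Dn shows up as unavailable.
dns = Counter(m.group(1)
              for line in log_lines
              for m in re.finditer(r"Dn0=([^,]+)", line))
for dn, count in dns.most_common():
    print(count, dn)
```

The point is just to make the distribution visible: in the dumps above, nearly every failing Dn is an `rcont-relnresolve` shard under `svccont-6-*`/`svccont-9-*`, which is consistent with the "no replica is available" warnings rather than a handful of isolated endpoints.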
APIC1 =
JKTTBSAPC01# acidiag avread
Local appliance ID=1 ADDRESS=10.0.0.1 TEP ADDRESS=10.0.0.0/16 CHASSIS_ID=fe1e6044-b5cd-11e6-b644-6f1993457e9c
Cluster of 3 lm(t):1(2017-06-12T07:55:20.869+07:00) appliances (out of targeted 3 lm(t):3(2017-06-12T09:00:18.645+07:00)) with FABRIC_DOMAIN name=JKTTBS-FABRIC set to version=apic-2.2(1n) lm(t):1(2017-06-12T02:45:54.801+07:00); discoveryMode=PERMISSIVE lm(t):0(1970-01-01T07:00:00.003+07:00)
appliance id=1 address=10.0.0.1 lm(t):1(2017-06-12T14:55:24.326+07:00) tep address=10.0.0.0/16 lm(t):1(2017-06-12T14:55:24.326+07:00) oob address=123.231.137.253/25 lm(t):1(2017-06-12T07:55:21.116+07:00) version=2.2(1n) lm(t):1(2017-06-12T09:00:20.260+07:00) chassisId=fe1e6044-b5cd-11e6-b644-6f1993457e9c lm(t):1(2017-06-12T09:00:20.260+07:00) capabilities=0X2FFFFFFFF--0X2020--0X5 lm(t):1(2017-06-12T07:55:21.198+07:00) rK=(stable,present,0X207373642D687373) lm(t):1(2017-06-12T07:55:21.121+07:00) aK=(stable,present,0X207373642D687373) lm(t):1(2017-06-12T07:55:21.121+07:00) cntrlSbst=(APPROVED, FCH2034V0TF) lm(t):1(zeroTime) (targetMbSn= lm(t):0(zeroTime), failoverStatus=0 lm(t):0(zeroTime)) podId=1 lm(t):1(2017-06-12T14:55:24.326+07:00) commissioned=YES lm(t):1(zeroTime) registered=YES lm(t):1(2017-06-12T14:55:24.326+07:00) standby=NO lm(t):1(2017-06-12T14:55:24.326+07:00) active=YES(2017-06-12T14:55:24.326+07:00) health=(applnc:112 lm(t):1(2017-06-12T07:55:21.445+07:00) svc's[3]:1 lm(t):1(2017-06-12T07:55:21.116+07:00)[6]:1 lm(t):1(2017-06-12T07:55:21.116+07:00)[9]:1 lm(t):1(2017-06-12T07:55:21.116+07:00)[10]:1 lm(t):1(2017-06-12T07:55:21.116+07:00)[11]:1 lm(t):1(2017-06-12T07:55:21.116+07:00)[14]:1 lm(t):1(2017-06-12T07:55:21.116+07:00)[16]:1 lm(t):1(2017-06-12T07:55:21.116+07:00)[22]:1 lm(t):1(2017-06-12T07:55:21.116+07:00)[23]:1 lm(t):1(2017-06-12T07:55:21.116+07:00)[34]:1 lm(t):1(2017-06-12T07:55:21.116+07:00))
appliance id=2 address=0.0.0.0 lm(t):2(2017-06-12T03:14:55.569+07:00) tep address=0.0.0.0 lm(t):0(zeroTime) oob address=172.22.2.250/24 lm(t):1(2017-06-12T02:44:18.264+07:00) version= lm(t):0(zeroTime) chassisId= lm(t):0(zeroTime) capabilities=0XFFFFFFF--0X2020--0 lm(t):0(zeroTime) rK=(stable,absent,0) lm(t):0(zeroTime) aK=(stable,absent,0) lm(t):0(zeroTime) cntrlSbst=(ERASED, ) lm(t):1(2017-06-12T03:14:55.570+07:00) (targetMbSn= lm(t):0(zeroTime), failoverStatus=0 lm(t):0(zeroTime)) podId=0 lm(t):0(zeroTime) commissioned=NO lm(t):1(2017-06-12T03:14:55.569+07:00) registered=NO lm(t):1(2017-06-12T07:55:20.869+07:00) standby=NO lm(t):0(zeroTime) active=NO(2017-06-12T07:55:20.870+07:00) health=(applnc:2 lm(t):1(2017-06-12T07:55:20.870+07:00))
appliance id=3 address=10.0.0.3 lm(t):3(2017-06-12T09:43:58.516+07:00) tep address=10.0.0.0/16 lm(t):3(2017-06-12T09:43:58.516+07:00) oob address=10.24.17.62/24 lm(t):1(2017-06-12T09:00:21.270+07:00) version=2.2(1n) lm(t):3(2017-06-12T09:00:20.319+07:00) chassisId=86183072-c56b-11e6-b094-29ba7b18b134 lm(t):3(2017-06-12T09:00:20.319+07:00) capabilities=0X2FFFFFFFF--0X2020--0X4 lm(t):3(2017-06-12T04:59:46.242+07:00) rK=(stable,present,0X207373642D687373) lm(t):1(2017-06-12T09:00:21.270+07:00) aK=(stable,present,0X207373642D687373) lm(t):1(2017-06-12T09:00:21.270+07:00) cntrlSbst=(APPROVED, FCH2034V0SS) lm(t):0(zeroTime) (targetMbSn= lm(t):0(zeroTime), failoverStatus=0 lm(t):0(zeroTime)) podId=1 lm(t):3(2017-06-12T09:43:58.516+07:00) commissioned=YES lm(t):1(2017-06-12T07:55:20.869+07:00) registered=YES lm(t):1(2017-06-12T04:58:15.589+07:00) standby=NO lm(t):3(2017-06-12T09:43:58.516+07:00) active=YES(2017-06-12T09:00:20.045+07:00) health=(applnc:112 lm(t):3(2017-06-12T09:00:18.815+07:00) svc's[3]:1 lm(t):3(2017-06-12T09:00:18.657+07:00)[6]:1 lm(t):3(2017-06-12T09:00:18.657+07:00)[9]:1 lm(t):3(2017-06-12T09:00:18.657+07:00)[10]:1 lm(t):3(2017-06-12T09:00:18.657+07:00)[11]:1 lm(t):3(2017-06-12T09:00:18.657+07:00)[14]:1 lm(t):3(2017-06-12T09:00:18.657+07:00)[16]:1 lm(t):3(2017-06-12T09:00:18.657+07:00)[22]:1 lm(t):3(2017-06-12T09:00:18.657+07:00)[23]:1 lm(t):3(2017-06-12T09:00:18.657+07:00)[34]:1 lm(t):3(2017-06-12T09:00:18.657+07:00))
---------------------------------------------
clusterTime=<diff=-25208067 common=2017-06-12T14:36:25.412+07:00 local=2017-06-12T21:36:33.479+07:00 pF=<displForm=0 offsSt=0 offsVlu=25200 lm(t):3(2017-06-12T09:00:18.831+07:00)>>
---------------------------------------------
JKTTBSAPC01#
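One thing worth noting in the avread output above is the clusterTime line: the local clock (21:36:33) is about seven hours ahead of the cluster's common time (14:36:25), which lines up with the configured UTC+7 offset (offsVlu=25200 seconds). A quick check of that arithmetic, using the two timestamps copied from the output (reading diff=-25208067 as milliseconds is my assumption):

```python
from datetime import datetime

# Timestamps taken from the clusterTime line of `acidiag avread` on APIC 1.
common = datetime.strptime("2017-06-12T14:36:25.412", "%Y-%m-%dT%H:%M:%S.%f")
local = datetime.strptime("2017-06-12T21:36:33.479", "%Y-%m-%dT%H:%M:%S.%f")

skew = (local - common).total_seconds()
print(skew)          # ~25208 s, i.e. roughly 7 hours
print(skew - 25200)  # residue beyond the configured UTC+7 offset (offsVlu=25200)
```

Most of the gap is explained by the timezone offset, with roughly 8 seconds of drift left over; whether that residue matters here is a question for TAC, but it makes NTP worth double-checking alongside the replica errors in the logs.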
APIC2 =
JKTTBSAPC02# acidiag avread
Local appliance ID=2 ADDRESS=10.0.0.2 TEP ADDRESS=10.0.0.0/16 CHASSIS_ID=ef9a2372-b5ce-11e6-af80-0d2d8a160b4f
Cluster of 3 lm(t):2(2017-06-12T05:36:08.193+07:00) appliances (out of targeted 3 lm(t):2(2017-06-12T05:36:08.193+07:00)) with FABRIC_DOMAIN name=JKTTBS-FABRIC set to version=apic-2.2(1n) lm(t):1(2017-06-12T02:45:54.801+07:00); discoveryMode=PERMISSIVE lm(t):0(1970-01-01T07:00:00.002+07:00)
appliance id=1 address=10.0.0.1 lm(t):2(2017-06-12T06:42:48.977+07:00) tep address=10.0.0.0/16 lm(t):1(2016-11-29T07:52:00.830+07:00) oob address=123.231.137.253/25 lm(t):2(2017-06-12T02:44:18.250+07:00) version=2.2(1n) lm(t):1(2017-06-12T02:44:18.243+07:00) chassisId=fe1e6044-b5cd-11e6-b644-6f1993457e9c lm(t):2(2017-06-12T06:42:48.977+07:00) capabilities=0X2FFFFFFFF--0X2020--0 lm(t):1(2017-06-12T02:50:01.425+07:00) rK=(stable,absent,0) lm(t):0(zeroTime) aK=(stable,absent,0) lm(t):0(zeroTime) cntrlSbst=(APPROVED, FCH2034V0TF) lm(t):106(2017-06-12T02:20:51.531+07:00) (targetMbSn= lm(t):0(zeroTime), failoverStatus=0 lm(t):0(zeroTime)) podId=1 lm(t):106(2017-06-12T02:20:51.531+07:00) commissioned=YES lm(t):2(zeroTime) registered=YES lm(t):3(2017-06-12T02:20:50.917+07:00) standby=NO lm(t):0(zeroTime) active=NO(2017-06-12T05:36:08.193+07:00) health=(applnc:2 lm(t):2(2017-06-12T05:36:08.193+07:00))
appliance id=2 address=10.0.0.2 lm(t):2(2017-06-12T12:36:12.404+07:00) tep address=10.0.0.0/16 lm(t):2(2017-06-12T12:36:12.404+07:00) oob address=172.22.2.250/24 lm(t):2(2017-06-12T05:36:08.248+07:00) version=2.2(1n) lm(t):2(2017-06-12T05:36:08.971+07:00) chassisId=ef9a2372-b5ce-11e6-af80-0d2d8a160b4f lm(t):2(2017-06-12T05:36:08.971+07:00) capabilities=0X2FFFFFFFF--0X2020--0X7 lm(t):2(2017-06-12T05:36:08.971+07:00) rK=(stable,present,0X207373642D687373) lm(t):2(2017-06-12T05:36:08.252+07:00) aK=(stable,present,0X207373642D687373) lm(t):2(2017-06-12T05:36:08.252+07:00) cntrlSbst=(APPROVED, FCH2037V2SU) lm(t):2(zeroTime) (targetMbSn= lm(t):0(zeroTime), failoverStatus=0 lm(t):0(zeroTime)) podId=1 lm(t):2(2017-06-12T12:36:12.404+07:00) commissioned=YES lm(t):2(zeroTime) registered=YES lm(t):2(2017-06-12T12:36:12.404+07:00) standby=NO lm(t):2(2017-06-12T12:36:12.404+07:00) active=YES(2017-06-12T12:36:12.404+07:00) health=(applnc:112 lm(t):2(2017-06-12T05:36:09.963+07:00) svc's[3]:1 lm(t):2(2017-06-12T05:36:08.248+07:00)[6]:1 lm(t):2(2017-06-12T05:36:08.248+07:00)[9]:1 lm(t):2(2017-06-12T05:36:08.248+07:00)[10]:1 lm(t):2(2017-06-12T05:36:08.248+07:00)[11]:1 lm(t):2(2017-06-12T05:36:08.248+07:00)[14]:1 lm(t):2(2017-06-12T05:36:08.248+07:00)[16]:1 lm(t):2(2017-06-12T05:36:08.248+07:00)[22]:1 lm(t):2(2017-06-12T05:36:08.248+07:00)[23]:1 lm(t):2(2017-06-12T05:36:08.248+07:00)[34]:1 lm(t):2(2017-06-12T05:36:08.248+07:00))
appliance id=3 address=10.0.0.3 lm(t):3(2017-06-12T09:43:58.516+07:00) tep address=10.0.0.0/16 lm(t):3(2017-06-12T09:43:58.516+07:00) oob address=10.24.17.62/24 lm(t):2(2017-06-12T02:44:18.253+07:00) version=2.2(1n) lm(t):3(2017-06-12T02:55:28.699+07:00) chassisId=86183072-c56b-11e6-b094-29ba7b18b134 lm(t):2(2017-06-12T06:42:48.977+07:00) capabilities=0X2FFFFFFFF--0X2020--0X4 lm(t):3(2017-06-12T02:49:07.482+07:00) rK=(stable,absent,0) lm(t):0(zeroTime) aK=(stable,absent,0) lm(t):0(zeroTime) cntrlSbst=(APPROVED, FCH2034V0SS) lm(t):0(zeroTime) (targetMbSn= lm(t):0(zeroTime), failoverStatus=0 lm(t):0(zeroTime)) podId=1 lm(t):3(2017-06-12T09:43:58.516+07:00) commissioned=YES lm(t):2(2017-06-12T05:36:08.193+07:00) registered=YES lm(t):1(2017-06-12T02:44:17.874+07:00) standby=NO lm(t):0(zeroTime) active=NO(2017-06-12T05:36:08.193+07:00) health=(applnc:2 lm(t):2(2017-06-12T05:36:08.193+07:00))
---------------------------------------------
clusterTime=<diff=-25208067 common=2017-06-12T14:38:03.503+07:00 local=2017-06-12T21:38:11.570+07:00 pF=<displForm=0 offsSt=0 offsVlu=25200 lm(t):2(2017-06-12T05:36:08.977+07:00)>>
---------------------------------------------
JKTTBSAPC02#
APIC3 =
JKTTBSAPC03# acidiag avread
Local appliance ID=3 ADDRESS=10.0.0.3 TEP ADDRESS=10.0.0.0/16 CHASSIS_ID=86183072-c56b-11e6-b094-29ba7b18b134
Cluster of 3 lm(t):3(2017-06-12T02:44:17.994+07:00) appliances (out of targeted 3 lm(t):3(2017-06-12T09:00:18.645+07:00)) with FABRIC_DOMAIN name=JKTTBS-FABRIC set to version=apic-2.2(1n) lm(t):1(2017-06-12T02:45:54.801+07:00); discoveryMode=PERMISSIVE lm(t):0(1970-01-01T07:00:00.003+07:00)
appliance id=1 address=10.0.0.1 lm(t):1(2017-06-12T14:55:24.326+07:00) tep address=10.0.0.0/16 lm(t):1(2017-01-31T15:17:04.045+07:00) oob address=123.231.137.253/25 lm(t):3(2017-06-12T09:00:21.266+07:00) version=2.2(1n) lm(t):1(2017-06-12T09:00:20.260+07:00) chassisId=fe1e6044-b5cd-11e6-b644-6f1993457e9c lm(t):1(2017-06-12T09:00:20.260+07:00) capabilities=0X2FFFFFFFF--0X2020--0X1 lm(t):1(2017-06-12T14:55:24.326+07:00) rK=(stable,present,0X207373642D687373) lm(t):3(2017-06-12T09:00:21.266+07:00) aK=(stable,present,0X207373642D687373) lm(t):3(2017-06-12T09:00:21.266+07:00) cntrlSbst=(APPROVED, FCH2034V0TF) lm(t):1(zeroTime) (targetMbSn= lm(t):0(zeroTime), failoverStatus=0 lm(t):0(zeroTime)) podId=1 lm(t):1(2017-06-12T14:55:24.326+07:00) commissioned=YES lm(t):3(2017-06-12T04:59:45.757+07:00) registered=YES lm(t):3(2017-06-12T04:59:45.842+07:00) standby=NO lm(t):1(2017-06-12T14:55:24.326+07:00) active=YES(2017-06-12T09:00:20.077+07:00) health=(applnc:112 lm(t):1(2017-06-12T07:55:21.445+07:00) svc's[3]:1 lm(t):1(2017-06-12T07:55:21.116+07:00)[6]:1 lm(t):1(2017-06-12T07:55:21.116+07:00)[9]:1 lm(t):1(2017-06-12T07:55:21.116+07:00)[10]:1 lm(t):1(2017-06-12T07:55:21.116+07:00)[11]:1 lm(t):1(2017-06-12T07:55:21.116+07:00)[14]:1 lm(t):1(2017-06-12T07:55:21.116+07:00)[16]:1 lm(t):1(2017-06-12T07:55:21.116+07:00)[22]:1 lm(t):1(2017-06-12T07:55:21.116+07:00)[23]:1 lm(t):1(2017-06-12T07:55:21.116+07:00)[34]:1 lm(t):1(2017-06-12T07:55:21.116+07:00))
appliance id=2 address=0.0.0.0 lm(t):2(2017-06-12T03:14:55.569+07:00) tep address=0.0.0.0 lm(t):0(zeroTime) oob address=172.22.2.250/24 lm(t):3(2017-06-12T02:44:18.253+07:00) version= lm(t):0(zeroTime) chassisId= lm(t):0(zeroTime) capabilities=0XFFFFFFF--0X2020--0 lm(t):0(zeroTime) rK=(stable,absent,0) lm(t):0(zeroTime) aK=(stable,absent,0) lm(t):0(zeroTime) cntrlSbst=(ERASED, ) lm(t):3(2017-06-12T03:13:58.464+07:00) (targetMbSn= lm(t):0(zeroTime), failoverStatus=0 lm(t):0(zeroTime)) podId=0 lm(t):0(zeroTime) commissioned=NO lm(t):1(2017-06-12T03:14:55.569+07:00) registered=NO lm(t):0(zeroTime) standby=NO lm(t):0(zeroTime) active=NO(2017-06-12T09:00:18.645+07:00) health=(applnc:2 lm(t):3(2017-06-12T09:00:18.645+07:00))
appliance id=3 address=10.0.0.3 lm(t):3(2017-06-12T09:43:58.516+07:00) tep address=10.0.0.0/16 lm(t):3(2017-06-12T09:43:58.516+07:00) oob address=10.24.17.62/24 lm(t):3(2017-06-12T09:00:18.657+07:00) version=2.2(1n) lm(t):3(2017-06-12T09:00:20.319+07:00) chassisId=86183072-c56b-11e6-b094-29ba7b18b134 lm(t):3(2017-06-12T09:00:20.319+07:00) capabilities=0X2FFFFFFFF--0X2020--0X5 lm(t):3(2017-06-12T04:59:46.242+07:00) rK=(stable,present,0X207373642D687373) lm(t):3(2017-06-12T09:00:18.663+07:00) aK=(stable,present,0X207373642D687373) lm(t):3(2017-06-12T09:00:18.663+07:00) cntrlSbst=(APPROVED, FCH2034V0SS) lm(t):3(zeroTime) (targetMbSn= lm(t):0(zeroTime), failoverStatus=0 lm(t):0(zeroTime)) podId=1 lm(t):3(2017-06-12T09:43:58.516+07:00) commissioned=YES lm(t):3(zeroTime) registered=YES lm(t):3(2017-06-12T09:43:58.516+07:00) standby=NO lm(t):3(2017-06-12T09:43:58.516+07:00) active=YES(2017-06-12T09:43:58.516+07:00) health=(applnc:112 lm(t):3(2017-06-12T09:00:18.815+07:00) svc's[3]:1 lm(t):3(2017-06-12T09:00:18.657+07:00)[6]:1 lm(t):3(2017-06-12T09:00:18.657+07:00)[9]:1 lm(t):3(2017-06-12T09:00:18.657+07:00)[10]:1 lm(t):3(2017-06-12T09:00:18.657+07:00)[11]:1 lm(t):3(2017-06-12T09:00:18.657+07:00)[14]:1 lm(t):3(2017-06-12T09:00:18.657+07:00)[16]:1 lm(t):3(2017-06-12T09:00:18.657+07:00)[22]:1 lm(t):3(2017-06-12T09:00:18.657+07:00)[23]:1 lm(t):3(2017-06-12T09:00:18.657+07:00)[34]:1 lm(t):3(2017-06-12T09:00:18.657+07:00))
---------------------------------------------
clusterTime=<diff=-25208064 common=2017-06-12T14:40:15.030+07:00 local=2017-06-12T21:40:23.094+07:00 pF=<displForm=0 offsSt=0 offsVlu=25200 lm(t):3(2017-06-12T09:00:18.831+07:00)>>
---------------------------------------------
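The avread dumps above are dense; the fields that matter for clustering are each appliance's commissioned/registered/active flags and its `health=(applnc:NNN)` value. A quick way to compare the three views is a small parser like the hypothetical helper below (not an APIC tool, just a sketch over the output format shown above):

```python
import re

def appliance_health(avread_text):
    """Pull appliance id, active flag, and applnc health out of 'acidiag avread' output."""
    results = {}
    for line in avread_text.splitlines():
        m = re.search(r"appliance id=(\d+).*?active=(YES|NO).*?health=\(applnc:(\d+)", line)
        if m:
            results[int(m.group(1))] = {
                "active": m.group(2) == "YES",
                "applnc": int(m.group(3)),
            }
    return results

# Abbreviated lines in the shape of the APIC2 output above
sample = (
    "appliance id=1 address=10.0.0.1 active=NO(2017-06-12T05:36:08.193+07:00) health=(applnc:2)\n"
    "appliance id=2 address=10.0.0.2 active=YES(2017-06-12T12:36:12.404+07:00) health=(applnc:112)\n"
)
print(appliance_health(sample))
# {1: {'active': False, 'applnc': 2}, 2: {'active': True, 'applnc': 112}}
```

Run against the three dumps, this makes the asymmetry obvious: each APIC reports itself (and sometimes one peer) as active, while the third member is seen as unhealthy or absent.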
APIC1 =
JKTTBSAPC01# acidiag rvread
\- unexpected state; /-unexpected mutator;
s-> 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32lcl
r->123123123123123123123123123123123123123123123123123123123123123123123123123123123123123123123123lcl
1
2
3\X\X\\\\X\\XX\\\X\\X\X\\\\X\\XX\\\X\\X\X\\\\X\\XX\\\X\\X\X\\\\X\\XX\\\X\\X\X\\\\X\\XX\\\X\\X\X\\
4
5
6\X\X\\\\X\\XX\\\X\\X\X\\\\X\\XX\\\X\\X\X\\\\X\\XX\\\X\\X\X\\\\X\\XX\\\X\\X\X\\\\X\\XX\\\X\\X\X\\
7
8
9\X\X\\\\X\\XX\\\X\\X\X\\\\X\\XX\\\X\\X\X\\\\X\\XX\\\X\\X\X\\\\X\\XX\\\X\\X\X\\\\X\\XX\\\X\\X\X\\
10\X\X\\\\X\\XX\\\X\\X\X\\\\X\\XX\\\X\\X\X\\\\X\\XX\\\X\\X\X\\\\X\\XX\\\X\\X\X\\\\X\\XX\\\X\\X\X\\
11\X\X\\\\X\\XX\\\X\\X\X\\\\X\\XX\\\X\\X\X\\\\X\\XX\\\X\\X\X\\\\X\\XX\\\X\\X\X\\\\X\\XX\\\X\\X\X\\
12
13
14\X\X\\\\X\\XX\\\X\\X\X\\\\X\\XX\\\X\\X\X\\\\X\\XX\\\X\\X\X\\\\X\\XX\\\X\\X\X\\\\X\\XX\\\X\\X\X\\
15
16\X\X\\\\X\\XX\\\X\\X\X\\\\X\\XX\\\X\\X\X\\\\X\\XX\\\X\\X\X\\\\X\\XX\\\X\\X\X\\\\X\\XX\\\X\\X\X\\
17
18
19
20
21
22\X\X\\\\X\\XX\\\X\\X\X\\\\X\\XX\\\X\\X\X\\\\X\\XX\\\X\\X\X\\\\X\\XX\\\X\\X\X\\\\X\\XX\\\X\\X\X\\
23\X\X\\\\X\\XX\\\X\\X\X\\\\X\\XX\\\X\\X\X\\\\X\\XX\\\X\\X\X\\\\X\\XX\\\X\\X\X\\\\X\\XX\\\X\\X\X\\
24
25
26
27
28
29
30
31
32
33
34\X\X\\\\X\\XX\\\X\\X\X\\\\X\\XX\\\X\\X\X\\\\X\\XX\\\X\\X\X\\\\X\\XX\\\X\\X\X\\\\X\\XX\\\X\\X\X\\
Some replicas are not in expected states and are mutated by unexpected apic's
---------------------------------------------
clusterTime=<diff=-25208068 common=2017-06-12T14:42:29.990+07:00 local=2017-06-12T21:42:38.058+07:00 pF=<displForm=0 offsSt=0 offsVlu=25200 lm(t):3(2017-06-12T09:00:18.831+07:00)>>
APIC2 =
JKTTBSAPC02# acidiag rvread
\- unexpected state; /-unexpected mutator;
s-> 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32lcl
r->123123123123123123123123123123123123123123123123123123123123123123123123123123123123123123123123lcl
1
2
3 \ \ \ \\ \ \ \ \ \\ \ \ \ \ \\ \ \ \ \ \\ \ \ \ \ \\ \ \ \
4
5
6 \ \ \\ \ \ \ \ \\ \ \ \ \ \\ \ \ \ \\ \ \ \\ \ \ \
7
8
9 \ \ \ \\ \ \ \ \ \\ \ \ \ \ \\ \ \ \ \ \\ \ \ \ \ \\ \ \ \
10 \ \ \ \\ \ \ \ \ \\ \ \ \ \ \\ \ \ \ \ \\ \ \ \ \ \\ \ \ \
11 \ \ \ \\ \ \ \ \ \\ \ \ \ \ \\ \ \ \ \ \\ \ \ \ \ \\ \ \ \
12
13
14 \ \ \ \\ \ \ \ \ \\ \ \ \ \ \\ \ \ \ \ \\ \ \ \ \ \\ \ \ \
15
16 \ \ \ \\ \ \ \ \ \\ \ \ \ \ \\ \ \ \ \ \\ \ \ \ \ \\ \ \ \
17
18
19
20
21
22 \ \ \ \\ \ \ \ \ \\ \ \ \ \ \\ \ \ \ \ \\ \ \ \ \ \\ \ \ \
23 \ \ \ \\ \ \ \ \ \\ \ \ \ \ \\ \ \ \ \ \\ \ \ \ \ \\ \ \ \
24
25
26
27
28
29
30
31
32
33
34 \ \ \ \\ \ \ \ \ \\ \ \ \ \ \\ \ \ \ \ \\ \ \ \ \ \\ \ \ \
Some replicas are in not expected states
---------------------------------------------
clusterTime=<diff=-25208067 common=2017-06-12T14:43:06.206+07:00 local=2017-06-12T21:43:14.273+07:00 pF=<displForm=0 offsSt=0 offsVlu=25200 lm(t):2(2017-06-12T05:36:08.977+07:00)>>
APIC3 =
JKTTBSAPC03# acidiag rvread
\- unexpected state; /-unexpected mutator;
s-> 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32lcl
r->123123123123123123123123123123123123123123123123123123123123123123123123123123123123123123123123lcl
1
2
3\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\
4
5
6\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\
7
8
9\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\
10\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\
11\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\
12
13
14\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\
15
16\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\
17
18
19
20
21
22\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\
23\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\
24
25
26
27
28
29
30
31
32
33
34\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\
Some replicas are in not expected states
---------------------------------------------
clusterTime=<diff=-25208064 common=2017-06-12T14:43:43.390+07:00 local=2017-06-12T21:43:51.454+07:00 pF=<displForm=0 offsSt=0 offsVlu=25200 lm(t):3(2017-06-12T09:00:18.831+07:00)>>
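Per the rvread legend, each service row is a 96-cell grid (32 shards x 3 replicas) in which `\` marks an unexpected state and `/` an unexpected mutator (the `X` cells in APIC1's grid are not defined by the legend and are ignored here). A hypothetical helper can tally the markers per service to summarize grids like the ones above:

```python
def count_bad_replicas(rvread_rows):
    """Tally '\\' (unexpected state) and '/' (unexpected mutator) per service row.

    Each row starts with the service id, followed by the replica grid;
    healthy rows carry only the id, so their counts come out as zero.
    """
    summary = {}
    for row in rvread_rows:
        i = 0
        while i < len(row) and row[i].isdigit():
            i += 1
        svc, grid = int(row[:i]), row[i:]
        summary[svc] = {
            "unexpected_state": grid.count("\\"),
            "unexpected_mutator": grid.count("/"),
        }
    return summary

# Abbreviated rows in the shape of the rvread output above
rows = ["3\\\\X", "4", "34\\/"]
print(count_bad_replicas(rows))
```

Applied to the real dumps, the same services (3, 6, 9, 10, 11, 14, 16, 22, 23, 34) show bad replicas on every APIC, which matches the "Some replicas are not in expected states" verdict printed at the bottom of each grid.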
06-13-2017 09:19 AM
Achmadfarisy,
This is a newly provisioned APIC cluster? I have some questions around the history of the APICs.
Some Observations:
1. APICs 1 and 3 are clustered but report that APIC 2 is nonexistent. Was it decommissioned at some point? More specifically, what steps were taken that led to all three APICs showing this state?
2. APIC 2 believes it is in the fabric with APICs 1 and 3, which tells me it must have joined successfully at some point. As with question 1, what exact steps were taken prior to reaching this state?
3. A number of shards/replicas are in a bad state. Was an upgrade performed before clustering the APICs, or were they all cleanly installed on this version before clustering was attempted?
Depending on whether or not you have any config worth saving (or ideally a config export/snapshot), it may be quicker to rebuild the fabric and import your configuration.
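On the import side, assuming the snapshot came from the APIC config export feature, a restore is normally triggered by posting a `configImportP` policy to the APIC REST API. The sketch below only builds the request body; the attribute names follow Cisco's config import/export documentation, but the policy name, file name, and endpoint are placeholders that should be verified against the REST API reference for your version:

```python
def build_import_payload(policy_name, file_name):
    """Build a configImportP POST body that replaces the running config atomically."""
    return {
        "configImportP": {
            "attributes": {
                "name": policy_name,
                "fileName": file_name,    # snapshot tarball previously exported
                "importType": "replace",  # full replace rather than best-effort merge
                "importMode": "atomic",   # all-or-nothing
                "snapshot": "false",
                "adminSt": "triggered",   # fire the import on POST
            }
        }
    }

# Placeholder policy and file names for illustration only
payload = build_import_payload("restore-fabric", "ce2_defaultOneTime-2017-06-12.tar.gz")
# POST this (after authenticating via /api/aaaLogin.json) to:
#   https://<apic>/api/node/mo/uni/fabric/configimp-restore-fabric.json
```

Note this only restores APIC policy; whether a fabric rebuild disrupts data-plane traffic depends on how the switches themselves are wiped and rediscovered, which is worth confirming with TAC first.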
If you are keen on troubleshooting how this happened, I would definitely recommend a TAC case once you are able to provide a list of steps taken to get into this state.
-Gabriel
06-14-2017 01:01 AM
"This is a newly provisioned APIC cluster? "
No...
Some Observations:
"1. APICs 1 and 3 are clustered but report that 2 is non existent. Was it decommissioned at some point? "
Yes, APIC 2 was decommissioned from APIC 3 at some point.
"Depending on whether or not you have any config worth saving (or ideally a config export/snapshot), it may be quicker to rebuild the fabric and import your configuration."
If we have a snapshot, can it rebuild the fabric back to normal without impacting traffic on the leaves?
Or do you maybe have the steps for how to rebuild from a snapshot file?