I have a CCM 8.6 cluster and I can't pull logs via RTMT from any server. I get the following message whenever I try to run Remote Browse (or Collect Files, or a real-time trace):
"" Could not connect to server <hostname> . Please check and correct following:
a. If IP address used, please check there is NAT in between RTMT and CallManager.
If NAT used goto CallManager administrator page configure hostname.
Define correct Ip address in host table of the machine running RTMT or local DNS server.
b.If hostname used in CCM Administration page
then correct ipaddress should be added to hosttable of the machine running RTMT or local DNS server.
c. If server is not down and can be pinged. ""
RTMT is installed on the AW server, which is in the same subnet as all the CCM servers.
The hosts table on the AW contains the addresses and hostnames of all the CCM servers.
The error is the same whether I log in by IP address or by hostname.
I am able to connect via RTMT and see performance stats (virtual memory, CPU, etc.); I just can't access trace files.
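The hosts-file/DNS checks the error message asks for can be scripted from the machine running RTMT. A minimal Python sketch, assuming you substitute your own node names for the placeholders in the list:

```python
import socket

def resolve(host):
    """Return the IP address `host` resolves to, or None if resolution fails."""
    try:
        return socket.gethostbyname(host)
    except socket.gaierror:
        return None

# Placeholder node names -- replace with the hostnames configured
# on the CCM Administration page for each server in the cluster.
for node in ["cucm-pub", "cucm-sub1"]:
    ip = resolve(node)
    print(f"{node}: {ip if ip else 'NOT RESOLVABLE -- fix hosts file or DNS'}")
```

If any node prints as not resolvable, RTMT will show the "Could not connect to server" error for it even though ping by IP works.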
I'm having the same problem here: everything is on the same LAN, I can monitor the server in RTMT and SSH to it, but I can't see the syslog or collect traces for one specific server.
Do you know where I can find the RTMT application logs (or how to enable them)?
Did you find a solution for your problem?
Just bumped into this issue as well... Unity Conn 8.6... So far no resolution. Tried restarting AMC Services and rebooting... still no luck.
Did you ever find a resolution?
DNS turned out to be the problem. The customer had moved their DNS servers to new IPs; changing the settings for the primary and secondary DNS resolved the issue.
While looking at this problem, I also found that I received 401 authorization failed errors when trying to launch the port status monitor in RTMT against the secondary Unity Connection server while logged in to the primary.
So apparently DNS must be operational for the port status monitor to work against the Unity Connection server you are not logged in to.
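For anyone else hitting the DNS cause on the appliance platform: the DNS settings can be checked and updated from the OS CLI instead of the OS Administration page. A sketch of the commands (the IP addresses and hostname are placeholders; the CLI warns that changing DNS briefly interrupts networking, so do it in a maintenance window):

```
show network eth0
set network dns primary 10.0.0.53
set network dns secondary 10.0.0.54
utils network ping cucm-pub.example.com
```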
In my case, the affected server had recently been added to the cluster. The solution was to restart the trace collection service on all nodes.
I came across the solution via documentation bug CSCvd10033
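If it helps anyone, the restart can be done from each node's CLI; on my system the relevant services appear as "Cisco Trace Collection Servlet" and "Cisco Trace Collection Service" (confirm the exact names with the list command first):

```
utils service list
utils service restart Cisco Trace Collection Service
```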