
CDR not showing calls for certain users

Paul Austin
Level 4

Hi All, CUCM version 11.5.1.13900-52. Looking at the CDR records, I can see that I am missing calls from certain users. I can see the calls come in through the CUBEs with the required fields, but nothing shows in CDR. I have checked that the service is running, and I can see records of calls with a similar number format, so I can rule out any filtering that may be in place.

Any ideas? I'm really mystified as to why.

Thanks

Paul


5 Replies

Jonathan Unger
Level 7

Hi Paul,

 

A few points:

  1. Does it appear to be all calls from certain users? Or some calls from a group of users? If it is the latter are you confident that the calls are actually being picked up? By default, CUCM does not log CDR records for calls which are "Zero Duration".

  2. I know that you mentioned that you are confident in your filtering, but you could also download a full CDR export (CDR > Export CDR/CMR) for a date period which you know contains a call involving one of these users that was picked up.

  3. Any correlation to which CUCM server the users are registered to? Or does it seem to be random?
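For point 2, the exported file can be sanity-checked offline. A minimal sketch of scanning an export for one user's number and flagging zero-duration calls (the field names here follow the standard CDR schema, but verify them against your export's actual header row; the sample data is made up):

```python
# Hypothetical sketch: scan a CUCM "Export CDR/CMR" CSV dump for calls
# involving a specific number, and flag zero-duration calls, which CUCM
# does not log by default.
import csv
import io

# Stand-in for an exported CDR file; a real export has many more columns.
SAMPLE_EXPORT = """\
callingPartyNumber,originalCalledPartyNumber,duration
4412345678,2001,35
4499999999,2002,0
4412345678,2003,0
"""

def calls_for_number(cdr_csv, number):
    """Return all CDR rows where the number is the caller or called party."""
    reader = csv.DictReader(io.StringIO(cdr_csv))
    return [row for row in reader
            if number in (row["callingPartyNumber"],
                          row["originalCalledPartyNumber"])]

matches = calls_for_number(SAMPLE_EXPORT, "4412345678")
zero_duration = [r for r in matches if r["duration"] == "0"]
print(f"{len(matches)} call(s) found, {len(zero_duration)} zero-duration")
```

If the number shows up in the raw export but not in your reports, the problem is in the filtering; if it is absent from the export entirely, the record was never written.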

Hi Jonathan, at the moment we are only noticing it on one user, however that doesn't mean we are not missing others - all the tests we have done from other users are recorded, so that gets me thinking about the call parameters and whether something like privacy may be a factor, but the CLI is shown on the inbound call.

A full CDR download doesn't show this particular caller.

 

Well, the call comes in through a CUBE - RP - CUC - RP - Attendant Console, so the path is common for all users.

 

Will see what TAC says and update.

Thanks All.

Paul Austin
Level 4

For information, it seems to be just one UCM subscriber that has problems. All services have been restarted with no success, and TAC are now advising a cluster restart, saying it is best policy to do a cluster restart every 100 days or so - that is a first, and a concerning one.

 

Thanks for the response, interesting to see that note by TAC.

 

Their response has triggered me, so warning RANT INCOMING

 

Having to reboot the CUCM cluster every 100 days as a best practice? If that is truly best practice, then I think it would be reasonable for TAC to file a documentation defect so that it is represented as such in the documentation. Now, I haven't been working with CUCM that long, only 8 years, and have opened dozens of TAC cases, but I have never heard that statement before (at least with CUCM).

 

For what it may or may not be worth... I have found that when working with TAC you need to actively push for clear answers on complicated issues. Otherwise they may fall back on a cop-out answer like "reboot the cluster". If you have an environment where you can do that without causing a headache, great - go for it and see if it works. But without identifying the true root cause, the issue could return. To be clear, I am not bashing TAC at all; they have saved my behind countless times over the years.

 

/EndRant

 

TLDR:

A scheduled reboot cycle is a poor workaround that addresses the symptoms of a problem, not the root cause - and the root cause is really what we need to get to.

 

Best of luck with this one! I hope you are able to get some real answers out of TAC!

While I agree that restarting the cluster every so many days is a hack, it does get things working again. Now, when there is any issue with CUCM, Jabber, CUC, etc., I just start by turning it off and on again (restarting), and it is faster than opening a support case. In all of the times I pursued the issue further, it turned out to be a memory leak, and the fastest way to fix it was a restart.