CUCM System version: 184.108.40.20600-12, cluster of 2 servers
BPS has been restarted several times, but jobs are not running; all jobs show Pending. 'Stop Processing' is not enabled.
I do not see any matching bug for this CUCM version. Since you have already tried restarting the BPS service, can you try deleting all the jobs listed under the Job Scheduler and then submitting a new BAT job? If it still does not work, please collect BPS traces; they might reveal something that could help.
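Part of the suggestion above can be done from the platform CLI on the publisher instead of the GUI. A minimal sketch, assuming SSH access to the OS Administration CLI (job deletion itself is GUI-only under Bulk Administration > Job Scheduler, as far as I know; the trace directory path shown in the comment is an assumption):

```
admin:utils service list
  (confirm "Cisco Bulk Provisioning Service" is STARTED)

admin:utils service restart Cisco Bulk Provisioning Service
  (restart BPS after clearing the jobs in the Job Scheduler page)
```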
Just for the knowledge base: are you selecting the "Run Immediately" option when running BAT?
I just took the following actions:
-deleted all jobs from the job scheduler (8:46am server time)
-restarted the BPS service on the publisher (8:47am server time)
-submitted a new job for an End User export of ~800 users as an immediate job, 1392216489 (8:48am server time)
Attaching the trace from the BPS service, but I am not sure the trace is configured properly because I could not find the BPS service under Trace Configuration in Serviceability.
I can't get much out of the debug output.
To get more info, go to the Serviceability page:
Trace ---> Configure ---> Select Publisher ---> Database and Admin Services ---> Cisco Bulk Provisioning Service
Set the Debug Trace Level to Debug (if left at this level, it may have a negative effect on the publisher).
Then repost the BPS logs.
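Once the trace level is raised, the logs can also be pulled with the `file` commands on the publisher CLI rather than RTMT. A rough sketch, assuming the BPS traces land under cm/trace/bps on the active partition (the exact subdirectory is an assumption; adjust after checking with `file list`):

```
admin:file list activelog cm/trace/bps recurs
  (locate the current BPS trace files)

admin:file get activelog cm/trace/bps/*
  (prompts for an SFTP server to copy the files to)
```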
If helpful, please let us know.
Not much more here, but for the sake of argument....
I performed a similar exercise.
-deleted the original job
-created a new job 1392220758
-captured the BPS log
Opened a TAC case. The message
2014-02-12 10:00:26,721 ERROR [Thread-0] ncs.NcsClient$ReceiveThread - java.net.SocketException: Connection timed out
...indicates that something has locked up the port and is blocking access; TAC recommended a reboot of the publisher.
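Before rebooting, the socket state can be checked from the publisher CLI; the restart itself can also be issued there. A minimal sketch, assuming OS Administration CLI access:

```
admin:show network status
  (netstat-style view of listening sockets and active connections;
   look for the port BPS is failing to reach)

admin:utils system restart
  (reboots the node; schedule for a maintenance window)
```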
Thank you to all who tried to help.
We had the same issue with our CUCM 8.6. Restarting the BPS service temporarily solved the problem; we tried restarting CUCM, but that didn't help. As a last measure we deleted over 700 previous jobs and then restarted the BPS service, and after that everything worked well. We have not seen the problem since.