%VOICE_IEC-3-GW: SIP: Internal Error (PRACK wait timeout) & (1xx wait timeout)

crazycatman
Level 1

Hi folks,

I have a situation where the logs are being spammed with two particular errors:

Jul 24 17:57:57.290 PHT: %VOICE_IEC-3-GW: SIP: Internal Error (PRACK wait timeout): IEC=1.1.129.7.62.0 on callID 18043 GUID=D21758D48E5E11E8BFA4811988141A24
Jul 24 17:57:59.214 PHT: %VOICE_IEC-3-GW: SIP: Internal Error (1xx wait timeout): IEC=1.1.129.7.59.0 on callID 18050 GUID=E0A0CB1E8E5E11E8B755D64093392022
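
In case it helps anyone decoding these, the IEC strings in those messages can be expanded on the gateway itself with the standard show command (the IEC value here is just copied from the first log line above):

show voice iec description 1.1.129.7.62.0

That at least spells out the subsystem and reason behind the "Internal Error" wording.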

Also this:

SIP: (29214) Group (a= group line) attribute, level 65535 instance 1 not found.

The errors appear heavily on both the MGW and the CUBE - but I can see calls arriving at the CUBE when debugging.
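
(For anyone wanting to reproduce, the "debugging" here is nothing exotic - just the usual commands for watching SIP on the CUBE, along these lines:)

debug ccsip messages
debug ccsip error
show sip-ua calls
show call active voice brief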

I'm unclear whether these error messages are the result of connectivity issues between the CUBE and the Sonus, connectivity issues between the CUBE and the MGW, or something else entirely.

I don't believe the problem lies between the MGW and the upstream SIP provider, as we have tons of other calls coming and going without this issue.

Our CUBE routers are 2911s running:

Cisco IOS Software, C2900 Software (C2900-UNIVERSALK9-M), Version 15.3(3)M5, RELEASE SOFTWARE (fc3) on one 
Cisco IOS Software, C2900 Software (C2900-UNIVERSALK9-M), Version 15.4(3)M1, RELEASE SOFTWARE (fc1) on the other.

Both are problematic.

 

We've been having intermittent issues with calls going in or out of this router (actually, it's happening on two routers).

I couldn't, for the life of me, find any reference to anyone else hitting this issue - I'm hoping someone can shed some light and point us in a general direction.

We have "resolved" the issue by reconfiguring the physical topology - so the call failure issue seems to be resolved but we still see these logs and I'm not feeling confident.

Background: when this router was originally set up, a single Gig interface was used for both the inside and outside of the SIP trunk. They had also tried to set up a port channel to connect it to two different ports on a stacked switch. They couldn't get that working, so they fell back to a single port - but they never removed the port channel config.
Both inside and outside remained on that single port (which I don't believe is an issue in itself), with subinterfaces in two different VLANs (one for "inside" and one for "outside").

The "reconfigure" was to get two separate ports patched to the switch and remove any sign of port channel config.

I don't have access to the switch to investigate further (it isn't ours and we don't manage it), and reloading it was out of the question because of the services it carries. Instead I reloaded both of my routers, which "solved" the issue for about five minutes before it went back to total failure.

I suspect the switch hit some bug after running for a long time with that bizarre leftover port channel config - but I'll probably never know.

I was hoping someone knew of the scenarios in which these errors generally show up.

 

Below is the TranslatorX diagram built from the end-to-end SIP debugs. My suspicion is that there's a connectivity issue between the media gateway and the CUBE (CUBE2 in the diagram): the MGW sends a 408, but I see nothing coming back from CUBE2 after it receives the INVITE from the Sonus. Note that none of my debugs are from the upstream provider or the Sonus - I only have access to the MGW, CUBE1, and CUBE2.

The end system behind the Sonus is Lync.

However, when we use a different SIP provider that goes directly to the Sonus, with no CUBE involved, the issue is not present.

[Image: TranslatorX diagram.png]
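
If it really is a case of CUBE2 never answering the MGW, the SIP retry and timer settings on each box at least show how long a side will wait before giving up and sending something like that 408 - these are just the standard verification commands, no config changes implied:

show sip-ua timers
show sip-ua retry
show sip-ua statistics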

 

Really hoping someone can give me another avenue of thought on this issue.
