This is a strange one and I'm not even sure how to troubleshoot it. Running 11.6 UCCE, and I have some very long speech/DTMF inputs I need to accept. The interdigit timeout seems to work at the beginning of the form, but seems to get shorter and shorter as you get toward 8-10 digits.
This may be a wacky suggestion, but do you see the same results if you:
- do one test where you enter (for instance) 1, wait 3 seconds, 1, wait 3 seconds, 1, wait 3 seconds, etc. until you get to an entry of 11111111
- vs. doing the same wait and enter process but using 12345678.
Basically, does the number you enter make any difference? I remember hearing about a bug a while back that was somehow tied to the value being passed/entered.
Similarly, is it tied to the total time spent on the entry (e.g., does it start acting weird once you've been in that element for a total of 30 seconds)?
If you set it to DTMF only does the problem still happen?
Seems to be more time-based for the whole input than anything else. Doesn't matter if it's a single digit or random digits. I'm testing exactly how you said: digit, count 1-2-3, digit. The 1-3 count is much quicker than 3 seconds, and I still only get to around the 6th digit before it times out. So it seems to always time out around 15-18s.
There's a VXML property that comes into play even when collecting input with Nuance, named maxspeechtimeout - it times the entire entry (whether it's DTMF or speech) and has a 10s default timeout.
I know it's named max speech timeout, but when working with Nuance to collect DTMF, it's also used for the DTMF input.
You can set Name: maxspeechtimeout, Value: 20s (remember the 's', else it'll default to milliseconds) as a VXML Property - either in the Settings tab of your form element (to set it for that one specific element) or in the root document (to set it as the default for every collection in your app).
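For context, here's what that property looks like in the generated VXML source - a minimal sketch per the VoiceXML 2.1 spec (the form id, field name, and grammar file are made-up placeholders, not from this app):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<vxml version="2.1" xmlns="http://www.w3.org/2001/vxml">
  <!-- Allow up to 20 seconds of total input; without the trailing 's'
       the value is interpreted as milliseconds -->
  <property name="maxspeechtimeout" value="20s"/>
  <form id="get_account">
    <field name="account">
      <grammar mode="dtmf" src="account.grxml"/>
      <prompt>Please enter your account number.</prompt>
    </field>
  </form>
</vxml>
```

Placed at the document level like this, the property applies to every collection in the document; moving it inside the `<field>` would scope it to that one field only.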
Let us know if this fixes the problem.
FYI, I'm not positive what the default value of maxspeechtimeout is on Nuance. It might be 10s, 15s, or 20s. Or it might be configured in your Nuance Baseline.xml.
Here's the default setting from Baseline.xml: 22 seconds.
<!-- Amount of speech, in milliseconds, that the
recognizer/endpointer sees before forcing end of speech. -->
<declaration group="recognizer+ep" type="int" set_by="default+api+grammar" broker="max_value">
Unfortunately, it didn't. It always seems to time out after 12 seconds or so, no matter what settings I use. At this point I'm fairly confident it's a bug in the GW, as testing this in my lab with a different IOS version works just fine. Off to TAC; will update with the results.
Thanks all for the help.
I've been bounced around between TAC and Nuance, but we're back at TAC, and I think we have definite proof that the IOS GW is the problem. I did learn a bit about troubleshooting the communication between the voice browser and Nuance, and have documented it so far here: https://dmacias.org/cvp-and-nuance-input-troubleshooting/
Once there's a resolution I'll update the world.
Just heard back from Cisco. The VXML GW doesn't support maxspeechtimeout, but it is supported in VVB, which I can confirm.
And neither does any version of VVB. This is an unfortunate piece of information, as moving to VVB is no small task for my customer.
Hope this helps others avoid breaking their heads trying to get this to work.
So just to clarify, was this for DTMF input using the Nuance GW adapter in Studio? Or Cisco DTMF-GW?
What did you find was the default maxspeechtimeout on IOS VXML GW?
We're using VoiceXML 2.1 with Nuance 10 for the GW in Studio.
I didn't find a maxspeechtimeout property being created by the GW, which is the core of the issue. So in this case the timeout ended up being set by Nuance. Out of the box it's 10s, and we set it to 22s by changing NSserver.cfg:
server.mrcp2.osrspeechrecog.mrcpdefaults.recognition-timeout VXIInteger 22000
Did I understand your question correctly?