Abilities/Capabilities of the UCCX doing internal data parsing/manipulating/ordering

nanosynth
Level 1

I am on a fact-finding mission here. Sometimes I think I'm the only one who does what I do with UCCX (I know that's not true), I just don't see much written about it (only call center/CSR agent material) compared to the things I try to do with it. I coded my UCCX to fetch medical claims from a multitude of different back-end commercial databases, and the data is stuffed into the UCCX either as a two-mile-long delimited string or as XML documents structured however I have my PHP send it. I come from the CXP/Edify IVR world, and those IVRs are made for this sort of thing.

Right now a caller may have 400+ medical claims returned on a single SSN query. I tell them how many claims were returned and have them narrow the search by date of birth, date of service, or both; or they can sit and listen to all 400+ claims one by one. For those follow-up searches, say by DOB, I have the UCCX make additional web calls to the back end using SSN and DOB (and tax ID number if it's a doctor's office).

Do I have, somewhere in the UCCX, the ability to keep working off those initial 400+ claims that are already in the UCCX, and search, sort, and order them internally as if the UCCX were a database in itself? How much further can I go with this? I read about simple holiday-schedule XML searches or picking from a short list of "skilled" agents in the UCCX, but that doesn't cut it here. Does anyone have experience with this and can tell me what I should be studying to do what I want, or is the label "call center in a box" pretty much that, and not "UCCX SQL server in a box"?

9 Replies

Very interesting problem, and it seems like you have the chops for a different solution than what you're currently doing. First, you could do what you're talking about by creating an XML file per SSN or other unique identifier, then referencing back to this "DB" as ssn.xml. However, I think that's going to be very painful and limiting. How I would solve this is by creating a middleware which does all the heavy lifting for your IVR. When you get the initial request, send the middleware the SSN. The middleware then handles retrieving the 400 claims and automatically sorts them by date: last 7 days, last 30 days, etc. Then, once the IVR has a date, you pass the SSN and date to the middleware, and it returns the specific IDs of the claims, in order, so you can read through them in the IVR. From my experience, something like this is never best handled in the IVR: you will run into memory and performance issues if, for example, 100 callers each with 400 claims all call in at the same time.
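To make the middleware idea concrete, here's a minimal sketch of what its date-filtering step could look like. Everything here is illustrative (the `Claim` record, `claimIdsInLastDays`, and the field names are my assumptions, not any real UCCX or middleware API): the middleware holds the full result set, and the IVR only ever receives a short, ordered list of claim IDs.

```java
import java.time.LocalDate;
import java.time.format.DateTimeFormatter;
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;

// Hypothetical middleware-side filter: names are illustrative only.
public class ClaimFilter {
    static final DateTimeFormatter YMD = DateTimeFormatter.ofPattern("yyyyMMdd");

    // A claim as the middleware might model it after parsing the back-end result.
    record Claim(String id, LocalDate serviceDate) {}

    // Return the IDs of claims whose service date falls within the last N days,
    // newest first -- this short list is all the IVR would need to read back.
    static List<String> claimIdsInLastDays(List<Claim> claims, LocalDate today, int days) {
        List<Claim> hits = new ArrayList<>();
        for (Claim c : claims) {
            if (!c.serviceDate().isBefore(today.minusDays(days))) {
                hits.add(c);
            }
        }
        hits.sort(Comparator.comparing(Claim::serviceDate).reversed());
        List<String> ids = new ArrayList<>();
        for (Claim c : hits) ids.add(c.id());
        return ids;
    }

    public static void main(String[] args) {
        List<Claim> claims = List.of(
            new Claim("C1", LocalDate.parse("20180101", YMD)),
            new Claim("C2", LocalDate.parse("20201125", YMD)),
            new Claim("C3", LocalDate.parse("20201129", YMD)));
        LocalDate today = LocalDate.parse("20201130", YMD);
        System.out.println(claimIdsInLastDays(claims, today, 7)); // [C3, C2]
    }
}
```

The 400-record heap problem stays on the middleware, and the IVR's working set per call shrinks to a handful of IDs.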

 

david

I agree with David. Separate your IVR from your data model. Think MVC framework here. Just make your PHP server solution do the caching and be faster. UCCX only has about 768MB of Java heap to work with for scripting in the entire server, so storing 400 records in memory or in the document repository is a bad idea. I would also recommend not retrieving a mile-long comma-separated result set, because if you cause a heap dump, you will core the CCX Engine and crash the server. Instead, have your PHP server return the result count as an integer.
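Ideally PHP returns that count as a bare integer, so nothing big ever crosses the wire. But to show how cheap the count is to derive from the XML itself (on whichever side still holds it), here's a small sketch using the standard `javax.xml.xpath` API with an XPath `count()` expression; the class and method names are mine, not from the UCCX script:

```java
import javax.xml.parsers.DocumentBuilderFactory;
import javax.xml.xpath.XPathConstants;
import javax.xml.xpath.XPathFactory;
import org.w3c.dom.Document;
import org.xml.sax.InputSource;
import java.io.StringReader;

public class ClaimCount {
    // Count <PERSON> records without ever materializing them in the script.
    static int countPersons(String xml) throws Exception {
        Document doc = DocumentBuilderFactory.newInstance()
                .newDocumentBuilder()
                .parse(new InputSource(new StringReader(xml)));
        // XPath count() returns a Number when evaluated as NUMBER.
        Number n = (Number) XPathFactory.newInstance().newXPath()
                .compile("count(//PERSON)")
                .evaluate(doc, XPathConstants.NUMBER);
        return n.intValue();
    }

    public static void main(String[] args) throws Exception {
        String xml = "<ELIGIBILITY><PERSON><dob>19700312</dob></PERSON>"
                   + "<PERSON><dob>20050520</dob></PERSON></ELIGIBILITY>";
        System.out.println(countPersons(xml)); // 2
    }
}
```

The integer is all the IVR needs to tell the caller "you have N claims" before prompting them to narrow the search.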

Wow, I NEVER would have even thought about that whole 'Java heap' business, very interesting. What you two tell me makes total sense. I am just starting out on our 'IVR collapsing' project (getting rid of CXP and Edify) and porting everything over to UCCX, and I only have two customers moved over so far, but I have 125+ other clients to move over as well. So I'd better sharpen up my PHP skills to get this done in PHP and just feed the UCCX basic, already-fetched, parsed, and narrowed-down information that is no longer two miles long... I'll make it a city block long... lol

Although Anthony and David actually DO know what they are talking about, I just had to find a way to do this in the UCCX, and I did. Now let's see if I can crash the UCCX.

First, the UCCX does a data fetch of 300+ XML records and shoves them into the FullEligDoc variable.

Take 300 of these <PERSON> elements between the main ELIGIBILITY top and bottom elements stored in the FullEligDoc variable:

 

<ELIGIBILITY>
  <PERSON>
    <name>Rex D VanHagar</name>
    <last_name>VanHagar</last_name>
    <first_name>Rex</first_name>
    <dob>19700312</dob>
    <from_date>20180101</from_date>
    <thru_date>20201130</thru_date>
  </PERSON>
  <PERSON>
    <name>Tico D VanHagar</name>
    <last_name>Vanhagar</last_name>
    <first_name>Tico</first_name>
    <dob>20050520</dob>
    <from_date>20180101</from_date>
    <thru_date>20201130</thru_date>
  </PERSON>
</ELIGIBILITY>

 

Then pull out the section with the DOB of 20050520, so it looks exactly like this, and put it into the NarrowDoc variable:

 

<PERSON>
  <name>Tico D VanHagar</name>
  <last_name>Vanhagar</last_name>
  <first_name>Tico</first_name>
  <dob>20050520</dob>
  <from_date>20180101</from_date>
  <thru_date>20201130</thru_date>
</PERSON>

 

 

Then I can further parse the NarrowDoc variable individually, like:

fromDateString = Get XML Document Data (NarrowDoc, "/descendant::PERSON/child::from_date")

 

This is the Java code that does it. I still need to use a variable for the DOB rather than hardcode it, but that's no big deal.

 

 

{
    try {
        // Parse the full eligibility XML. This assumes FullEligDoc holds the
        // XML as a String; a bare String passed to parse() is treated as a
        // URI, so wrap it in an InputSource instead.
        javax.xml.parsers.DocumentBuilderFactory factory = javax.xml.parsers.DocumentBuilderFactory.newInstance();
        javax.xml.parsers.DocumentBuilder builder = factory.newDocumentBuilder();
        org.w3c.dom.Document domDoc = builder.parse(new org.xml.sax.InputSource(new java.io.StringReader(FullEligDoc)));

        // Select every child element of the PERSON whose dob matches.
        javax.xml.xpath.XPath xpath = javax.xml.xpath.XPathFactory.newInstance().newXPath();
        String xpathEval = "//PERSON[dob='20050520']/*";
        org.w3c.dom.NodeList content = (org.w3c.dom.NodeList) xpath.compile(xpathEval).evaluate(domDoc, javax.xml.xpath.XPathConstants.NODESET);

        // Rebuild a single <PERSON> fragment from the matching children.
        StringBuilder sb = new StringBuilder();
        sb.append("<PERSON>");
        for (int i = 0; i < content.getLength(); i++) {
            sb.append("<" + content.item(i).getNodeName() + ">");
            sb.append(content.item(i).getTextContent());
            sb.append("</" + content.item(i).getNodeName() + ">");
        }
        sb.append("</PERSON>");
        return sb.toString();
    }
    catch (java.lang.Exception e) {
        e.printStackTrace();
        return "";
    }
}
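Since the remaining step is swapping the hardcoded DOB for a script variable, here's a sketch of what that version could look like, pulled out into a standalone method so it can be tested outside UCCX. The class name, method name, and the digits-only guard are my own additions (the guard keeps a stray DTMF entry from breaking the XPath string):

```java
import javax.xml.parsers.DocumentBuilderFactory;
import javax.xml.xpath.XPathConstants;
import javax.xml.xpath.XPathFactory;
import org.w3c.dom.NodeList;
import org.xml.sax.InputSource;
import java.io.StringReader;

public class NarrowByDob {
    // Same extraction as above, but the caller-entered DOB is passed in
    // instead of hardcoded. dob is expected to be the 8 DTMF digits.
    static String narrow(String eligXml, String dob) throws Exception {
        if (!dob.matches("\\d{8}")) return ""; // guard: digits only, so the quoted XPath can't break
        org.w3c.dom.Document doc = DocumentBuilderFactory.newInstance()
                .newDocumentBuilder()
                .parse(new InputSource(new StringReader(eligXml)));
        NodeList content = (NodeList) XPathFactory.newInstance().newXPath()
                .compile("//PERSON[dob='" + dob + "']/*")
                .evaluate(doc, XPathConstants.NODESET);
        // Rebuild the matching <PERSON> fragment as a string.
        StringBuilder sb = new StringBuilder("<PERSON>");
        for (int i = 0; i < content.getLength(); i++) {
            sb.append('<').append(content.item(i).getNodeName()).append('>')
              .append(content.item(i).getTextContent())
              .append("</").append(content.item(i).getNodeName()).append('>');
        }
        return sb.append("</PERSON>").toString();
    }

    public static void main(String[] args) throws Exception {
        String xml = "<ELIGIBILITY>"
            + "<PERSON><name>Rex</name><dob>19700312</dob></PERSON>"
            + "<PERSON><name>Tico</name><dob>20050520</dob></PERSON>"
            + "</ELIGIBILITY>";
        System.out.println(narrow(xml, "20050520"));
        // <PERSON><name>Tico</name><dob>20050520</dob></PERSON>
    }
}
```

In the UCCX script itself, the `dob` parameter would just be the String variable collected from the caller's keypad input.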


+5 for posting the solution. Curious what this does to your memory at scale. If you can pump a few calls at a time, it would be a cool exercise to see how much of a load this actually creates.

 

david

Hi David. I just got this working two days ago. I don't have it in real service yet for any actual clients of ours, but I can certainly whip up a script to test it out using one of our existing clients' databases, and believe me, that's the DB that will easily give me 500 claims records in one web call, times me calling into the UCCX at least 3 times at once. Would you be able to tell me exactly what to do/set up for whatever test results you would like to see? This would help me out immensely because I need to know for myself and for my company. Of course I will do all my tests in the middle of the night on a weekend. Maybe write me something whenever you have spare time. Thanks!

I would need to do some research on which counters to look at. I would start with what RTMT provides you regarding the health of your UCCX system. From there, take a baseline of what it looks like today, then start making test calls and see if you notice any major difference. I personally would never put something so computationally heavy into production. Even if it can handle it, it just seems like playing with fire.

 

david

You are coming in crystal clear, David; I totally agree with you. I just wanted to find a way to make it work, because I am wired that way. I already have the PHP doing all the individual web calls to the back end, so the data comes back one record at a time for each caller, times however many callers are in at once. I did hit the 1000-step mark once and the script quit working for that caller, so I increased it. In that case the caller had 300 or so records to his name in the database and chose to listen to each record one by one, so about 12 minutes into the one-by-one recital, the script hit the 1000-step mark. I saw it in RTMT. I still want to take readings under load, though, with what I figured out how to do.

It's really only the JVM heap memory in RTMT, as it's only around 768MB (version dependent), and if it maxes out, the Engine restarts itself and all calls on CTI ports are dropped.