01-14-2013 02:01 PM
We have about a dozen workflows (CPO 2.3 processes), each triggered to start by a unique Tidal event. When triggered, each workflow fetches numeric data (just a few small values, two or three digits each, such as 15 or 300) from a single web service endpoint and uses it in its workflow logic. The data fetched by each workflow is usually specific to that workflow. We want each workflow to cache the fetched data locally. The reason for caching locally in CPO: the rate of web service lookups is 100+ per second, and the web service cannot scale beyond 100 lookups per second. The cached data will be local to that workflow.
QUESTIONS PLEASE:
1. What is the best way to store the fetched data? Should we use local variables, global variables, or some other storage mechanism?
2. Should the cached data be fetched again each time the process (workflow) is restarted?
3. When the data changes at the web service, the web service can send fresh data back to CPO to refresh the previously cached data. What is the best way to receive this data and refresh the local cache in CPO while the workflow is running and operational (i.e., without disrupting the workflow's running state)?
Detailed answers would be appreciated, to avoid follow-up questions.
thanks in advance,
Jamal
01-14-2013 04:37 PM
If the data is truly local to the workflow, use a process-definition-scoped variable.
Your options are process-definition-scoped variables, global variables, or service targets. Definition-scoped and global variables are almost the same thing; they differ in which other workflows can see and set the data.
01-14-2013 04:57 PM
Von,
Thanks for the response to my first question. I would appreciate responses to the other questions as well.
2. Should the cached data be fetched again each time the process (workflow) is restarted?
3. When the data changes at the web service, the web service can send fresh data back to CPO to refresh the previously cached data. What is the best way to receive this data and refresh the local cache in CPO while the workflow is running and operational (i.e., without disrupting the workflow's running state)?
4. Can the web service update the values contained in process-scoped variables or global variables using the CPO WS-API?
thanks,
Jamal
01-14-2013 06:29 PM
If you want to update the value asynchronously, from outside the primary running process, then you need to store the data in a variable that can be accessed outside the running process. A global variable would be the better choice for that. This could also be achieved with a service target, but a global variable is easier.
You can have your web service call the PO northbound web service to invoke a secondary PO process, passing the data in as a parameter. That secondary process then sets the value into the global variable.
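As a minimal sketch of that push, assuming a SOAP-style northbound endpoint: the URL, operation name, and parameter names below are placeholders, not the actual CPO WS-API, so take the real ones from your PO server's WSDL.

```python
# Hypothetical sketch: the source web service pushes an update into PO by
# starting a secondary process through the northbound SOAP interface.
# PO_ENDPOINT, "ExecuteProcess", and the parameter names are placeholders.
import requests

PO_ENDPOINT = "http://po-server/WS/Process"  # placeholder URL

def push_threshold_update(device_ip: str, threshold: int) -> None:
    """Start a hypothetical 'RefreshCache' secondary process with the new value."""
    envelope = f"""<?xml version="1.0" encoding="utf-8"?>
<soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">
  <soap:Body>
    <ExecuteProcess>  <!-- placeholder operation name -->
      <processName>RefreshCache</processName>
      <inputParameters>
        <DeviceIPAddress>{device_ip}</DeviceIPAddress>
        <UtilizationThreshold>{threshold}</UtilizationThreshold>
      </inputParameters>
    </ExecuteProcess>
  </soap:Body>
</soap:Envelope>"""
    resp = requests.post(
        PO_ENDPOINT,
        data=envelope.encode("utf-8"),
        headers={"Content-Type": "text/xml; charset=utf-8"},
        timeout=30,
    )
    resp.raise_for_status()

push_threshold_update("10.0.0.15", 85)
```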
If your data is specific to the process instance, you may want to use a table global variable with one column holding the ID of the process instance (or some other unique key identifying what is being cached) and another holding the value to cache. Be sure to add logic to delete the row when a process instance goes away.
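To make that caching pattern concrete, here is an illustrative Python model of the logic; in PO the dictionary below would be the global table variable (DeviceIPAddress, UtilizationThreshold), the fetch would be a web service activity, and the eviction would run when the process instance goes away. fetch_from_web_service() is a hypothetical stand-in.

```python
# Illustrative model of the cache-in-a-table pattern, not PO code.
cache: dict[str, int] = {}

def fetch_from_web_service(device_ip: str) -> int:
    raise NotImplementedError("stand-in for the real web service lookup")

def get_threshold(device_ip: str) -> int:
    # Look up the cached row first; hit the web service only on a miss,
    # which is what keeps the lookup rate under the 100/sec ceiling.
    if device_ip not in cache:
        cache[device_ip] = fetch_from_web_service(device_ip)
    return cache[device_ip]

def evict(device_ip: str) -> None:
    # Mirror of "delete the row when a process instance goes away".
    cache.pop(device_ip, None)
```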
01-14-2013 08:28 PM
Thanks, Von, for patiently answering my questions. I had figured out exactly what you said above through my own experimentation; however, given the scale of my problem, I am discouraged from taking this approach. Let me elaborate on that scale:
- The web service DB contains 50,000 devices. For each device, the DB stores a numeric Utilization Threshold value.
- Let's assume I have only ONE PO process, for the purpose of this discussion. Each time this process is triggered to RUN, it goes to the web service to pick up the numeric value for a given device and stores it in Tidal using a global table variable (DeviceIPAddress, UtilizationThresholdData). The idea behind building this global table in Tidal is to reduce the number of web service lookups in future runs of the PO process.
QUESTIONS:
1) Can the PO process create a global table variable containing, say, 20,000 entries? If not, what's the max limit?
2) Assuming the PO process can build a global table variable with 20,000 entries, will a lookup of ThresholdData for a given DeviceIPAddress be VERY SLOW, making the PO process inefficient?
- Let's say at time Tn, the number of entries in the global table variable is 20,000.
- Let's say at time Tn+1, the ThresholdData for 8000 (of the 20,000) devices got changed in the web service DB.
QUESTIONS:
3) Can the web service send ThresholdData for 8,000 devices using a table variable of type INPUT? I did not see table variables of type INPUT supported by a PO process, hence this question.
4) Assuming a table variable of type INPUT is not supported, how can the web service send updates for 8,000 devices in a single shot?
Answers to all the questions would be appreciated.
thanks,
Jamal
01-15-2013 05:48 PM
As we are in the design stage, any response that helps solve the volume issue described above (with or without table variables) for updates coming into the PO process would be appreciated. Thanks.
01-15-2013 06:27 PM
Sorry, really busy here. You may want to contact TAC if you need a timely response.
If you are integrating with a performance management system that has thresholds and/or measurements, it would be best to check those in the system designed for that (the performance manager) and only trigger PO when something needs to be done outside of that tool. Use the performance manager for what it's good at rather than replicating that capability in the process orchestrator.
Having a large number of processes or very large tables should prompt you to question your use case and design for it. Think about whether you really want 50,000 processes that may execute frequently or loop checking some value; I suspect there is a design problem here. By default, PO processes cause a lot of DB activity to make them restartable across PO server restarts, etc. Definitely turn off persistence on PO processes that execute this frequently.
So no, a table may not be the best design for this use case. But I don't know the whole use case.
To answer your direct questions: there is no maximum table size, but there is a maximum on the size of various selects against the table, since the outputs of one activity are available to downstream activities in the workflow, and this can consume a lot of memory and possibly DB space. If you need something bigger, you may want to use a database. But generally, a table could work (though I question the design I'm anticipating).
If you use a select statement to query the table, it's pretty fast. If you looped trying to find a match, it would be incredibly slow.
You cannot pass a table variable as an input parameter, but you can pass XML in and use the Load Table from XML activity, etc.
There is an Update Row in Table activity for tables. This should make the updates fast. You would still have to loop through the table converted from XML to update the thresholds table, which is not great if you are looking at thousands of table updates in a frequently executed process. An XSL transform could do the work if you are up to that, but it's pretty advanced.
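As a hedged sketch of the XML route: the sender could assemble one XML document carrying all changed rows, so 8,000 updates arrive in a single call rather than 8,000 calls. The element names below are placeholders; match them to whatever schema your Load Table from XML activity expects.

```python
# Build a single XML payload of changed rows for a bulk update.
# "ThresholdTable", "Row", and the column element names are placeholders.
from xml.etree import ElementTree as ET

def build_update_payload(changes: dict[str, int]) -> str:
    """changes maps DeviceIPAddress -> new UtilizationThreshold."""
    table = ET.Element("ThresholdTable")
    for device_ip, threshold in changes.items():
        row = ET.SubElement(table, "Row")
        ET.SubElement(row, "DeviceIPAddress").text = device_ip
        ET.SubElement(row, "UtilizationThreshold").text = str(threshold)
    return ET.tostring(table, encoding="unicode")

payload = build_update_payload({"10.0.0.15": 85, "10.0.0.16": 70})
```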
Teaching how to design for scalability is not something I can do piecemeal with partial views of the use case. For a case this large, with high performance and scaling concerns, you may want to bring others with programming and PO experience into your design and implementation, such as Cisco Services.
01-15-2013 07:34 PM
Indeed, this is not the best way to do design work of this nature. You have requirements about scale and size, and in all computing scenarios those trade off against speed and real-timeliness.
There are usually tweaks and optimizations that can be made. For example, we know next to nothing about the system that is the source of this data. It sounds like a performance tracking system of some kind that can send triggers. Most systems fitting that description also have thresholding capability, so you could send updates, or a request for action (workflow execution), to PO only when a number is on the wrong side of a threshold. This usually reduces the volume of traffic by orders of magnitude.
How real-time do you need this to be? If you're polling every second for a change and getting 8,000 important changes per second, and each of them requires some kind of multi-step real work (because you're using an orchestrator to do it), we're going to have a performance problem. Could you poll once every 5 minutes? Every 15 minutes? That makes a massive difference.
You can wrap table data in CDATA tags in XML, then read it into a table once it gets into the process. You could even do XSLT operations against it. PO uses the native .NET engine for this, so it's quite performant.
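For illustration, here is a minimal Python sketch of that CDATA approach, with an assumed payload shape (the real parameter names would come from your process definition):

```python
# Ship opaque table data inside one XML input parameter via CDATA,
# then parse it back out on the receiving side. Shapes are illustrative.
from xml.etree import ElementTree as ET

rows = [("10.0.0.15", 85), ("10.0.0.16", 70)]

# Serialize the rows as CSV text and wrap them in a CDATA section so the
# payload passes through the XML layer untouched.
csv_body = "\n".join(f"{ip},{thr}" for ip, thr in rows)
wrapped = f"<TableData><![CDATA[{csv_body}]]></TableData>"

# Receiving side: pull the text back out and split it into rows.
parsed = [line.split(",") for line in ET.fromstring(wrapped).text.splitlines()]
```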
01-15-2013 09:00 PM
Thanks to Von and Mike for responding. I have summarized your responses to my specific questions below. Please check whether my understanding of your responses is right.
QUESTIONS:
1) Can the PO process create a global table variable containing say 20,000 entries? If not, what's the max limit?
RESPONSE: There is no max limit to the number of rows in a table variable (global or local).
2) Assuming the PO process can build a global table variable with 20,000 entries, will a lookup of ThresholdData for a given DeviceIPAddress be VERY SLOW, making the PO process inefficient?
RESPONSE: A select statement to query the table is pretty fast. Looping to find a match would be incredibly slow.
- Let's say at time Tn, the number of entries in the global table variable is 20,000.
- Let's say at time Tn+1, the ThresholdData for 8000 (of the 20,000) devices got changed in the web service DB.
QUESTIONS:
3) Can the web service send ThresholdData for 8,000 devices using a table variable of type INPUT? I did not see table variables of type INPUT supported by a PO process, hence this question.
RESPONSE: Wrap the table data in CDATA tags in XML, then read it into a table once it gets into the process.
Now to answer some of your questions:
1. We are planning to use CPO 2.3 as a real-time alert notification tool that sends email notifications to our NOC operations personnel. We would like to send a notification within 3-5 minutes of TCA generation (yes, TCAs come from a performance management tool such as InfoVista).
2. The Tidal workflow is expected to do some validation on a received TCA before deciding to send an email notification. For example, upon receiving a MAJOR/CRITICAL TCA, Tidal will wait 15 minutes to see if a CLEAR arrives for that TCA within that interval. If so, the Tidal workflow will generate a mail notification; if not, it will do nothing. We have many such HIGHER-LEVEL VALIDATION RULES that CANNOT BE PERFORMED AT THE PERFORMANCE MANAGER LEVEL. They can only be performed in Tidal, which offers an easy-to-use rules GUI and lets rules be changed in a matter of minutes.
3. No, we do not have 50,000 processes. We will have at most 25 processes (workflows), one for each TCA type coming out of InfoVista. Each process will know how to treat its TCA type before generating an email notification.
4. The number 50,000 corresponds to devices in our network. The validation rules implemented in the Tidal workflows will use data such as the 15 minutes mentioned above. An external DB will maintain the list of 50K devices and the 'time-to-wait-for-CLEAR' value for each of them. Like 'time-to-wait-for-CLEAR', we have other data for other validation rules that will be implemented in the Tidal workflows for the various TCAs we intend to support. This DB will be exposed to the Tidal workflows via a web services API.
I hope our use case is now clear. If so, please add your two cents toward answering my original questions. I can also get on a call to explain our use case. If you think I should engage with your services team, please send me their contact details and I will do so.
Thanks for all the support.
01-16-2013 06:17 AM
Jamal, the customer support forum is the wrong place for us to work through this discussion. Let's work separately on your use cases and design.
01-16-2013 06:51 AM
So you're building an event and notification system on top of your alarm manager?
Most FCAPS systems have the concept of an event, used to correlate different alarms under one "thing" that needs to be managed. This is particularly useful when you want notification of events such as a threshold being exceeded.
For many reasons that would take too long to explain here (including, but not limited to: throughput performance, latency performance, synchronous processing concerns, primarily locking and the lack thereof, and memory footprint), I'd strongly recommend you not build this in CPO. Sure, build the event-handler processes, but not the alarm + wait-for-CLEAR => event handler part. Build that standalone; it won't be very big, and it will perform much better in Java or .NET.
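For illustration only, here is a minimal standalone sketch of that alarm + wait-for-CLEAR correlator. The reply above recommends Java or .NET; Python is used here purely to show the shape of the logic. The alarm key, severity values, and the handler hook are assumptions, not a real InfoVista or PO interface.

```python
# Standalone correlator sketch: on a MAJOR/CRITICAL TCA, start a timer for
# the wait-for-CLEAR window; report whether a CLEAR arrived in time so the
# per-TCA-type rule (notify or suppress) can be applied at the hook point.
import threading

WAIT_FOR_CLEAR_SECS = 15 * 60  # per-device value would come from the external DB
pending: dict[str, threading.Timer] = {}  # alarm key -> window-expiry timer
lock = threading.Lock()

def on_window_result(alarm_key: str, cleared: bool) -> None:
    # Hook point: apply the per-TCA-type validation rule here, e.g. trigger
    # the matching PO process, depending on whether a CLEAR arrived.
    print(f"{alarm_key}: CLEAR {'arrived' if cleared else 'did not arrive'} in window")

def _expire(alarm_key: str) -> None:
    with lock:
        pending.pop(alarm_key, None)
    on_window_result(alarm_key, False)

def on_tca(alarm_key: str, severity: str) -> None:
    if severity not in ("MAJOR", "CRITICAL"):
        return
    with lock:
        timer = threading.Timer(WAIT_FOR_CLEAR_SECS, _expire, args=(alarm_key,))
        pending[alarm_key] = timer
        timer.start()

def on_clear(alarm_key: str) -> None:
    with lock:
        timer = pending.pop(alarm_key, None)
    if timer:
        timer.cancel()
        on_window_result(alarm_key, True)
```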