Got a scenario where I need to trigger an event off a file event. We've got a file watcher: if the file turns up, the job gets called. All happy.
What we need is to send an email alert if the file doesn't turn up within a prescribed window. Effectively this is an event that fires because another event didn't happen. Alternatively, I'd need to put an event on the group not being called, but since the job group is loaded into the schedule effectively ad hoc, I'm not sure how I'd do that either. So far the only method I've got in my head is to add an action to the main group which writes a marker file, then add another suitably timed job outside of the main group which looks for that file and, if it doesn't find it (meaning the group never ran), triggers the email.
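A minimal sketch of that marker-file watchdog, as plain Python rather than scheduler config (the marker path, addresses, and SMTP relay are all hypothetical placeholders):

```python
import os
import smtplib
from email.message import EmailMessage


def marker_present(path: str) -> bool:
    """Return True if the marker file exists (i.e. the group ran),
    removing it so the check resets for the next cycle."""
    if os.path.exists(path):
        os.remove(path)
        return True
    return False


def send_missed_file_alert(smtp_host: str = "mail.example.com") -> None:
    # Hypothetical alert email; only called when the marker is missing.
    msg = EmailMessage()
    msg["Subject"] = "ALERT: expected file did not arrive"
    msg["From"] = "scheduler@example.com"
    msg["To"] = "ops@example.com"
    msg.set_content("The file-watcher group never ran: its marker file is missing.")
    with smtplib.SMTP(smtp_host) as smtp:
        smtp.send_message(msg)


# The suitably timed watchdog job, scheduled just after the window closes,
# would boil down to:
#     if not marker_present(r"C:\jobs\markers\daily_feed.ok"):
#         send_missed_file_alert()
```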
Anyone got a better idea? This sounds a bit convoluted. I'm thinking there's probably a more elegant way.
Would it work to set up a job event with the trigger "Job not ready by end of its time window"?
You would have to specify a time window on the job itself for this method to work, but it would be fairly straightforward.
Both approaches might work. We're going to play with this and see how we go. It depends to some extent on how some third parties operate, for example whether they'll stick to a fixed delivery window.
If they do, great. If they don't, it makes setting job windows a bit more difficult.
This might help you take it in a different direction:
We created an ALERT top-level group where we do all of our file-based alerting.
We chose to organize by time windows so they line up numerically, top to bottom, as a 24-hour day unfolds:
0000_0259 (the midnight to 3 a.m. window)
We use reverse file-dependency logic. If a file does not exist at the end of a time window, the "not exists" dependency is MET, the job then "completes normally" (which is BAD), and we have an event that sends an email on "completed normally". If the file exists, the "not exists" dependency can't be met, so we intentionally "time out" on "job not ready by end of its time window" and use an event to set the status to "skipped". Our operations team ignores skipped jobs, as that is "OK": the files did exist as expected.
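That inverted outcome table is easy to misread, so here it is as plain logic (a Python stand-in for the scheduler's dependency evaluation, not actual scheduler code):

```python
def window_outcome(file_exists_at_window_end: bool) -> str:
    """Model the reverse 'not exists' file dependency at end of window."""
    if not file_exists_at_window_end:
        # Dependency MET -> the job runs and completes normally, which is
        # BAD here; an event on "completed normally" sends the alert email.
        return "completed normally: send alert email"
    # File arrived -> the "not exists" dependency can never be met, so the
    # job times out on "job not ready by end of its time window" and an
    # event sets its status to "skipped", which operations ignores as OK.
    return "skipped: no alert"
```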
Typically we point these file dependencies at our archive area. This works for us because we also have a file-based pub/sub job set that deals with distribution and archiving. When a file is exported internally or delivered externally, these file jobs handle delivery to all subscribers, then archive as a last step. The alerts take advantage of this: if we ever got a file in for the day, it winds up in a dated archive folder ...\YYYY\MM\DD\<file>
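Assuming a Windows-style archive share, the dated folder that dependency points at might be built like this (the base path and file name are placeholders; the `...` prefix in the post is deliberately left unspecified):

```python
from datetime import date


def dated_archive_path(base: str, filename: str, d: date) -> str:
    """Build <base>\\YYYY\\MM\\DD\\<filename> for the alert job's file dependency."""
    return "\\".join([base, f"{d.year:04d}", f"{d.month:02d}", f"{d.day:02d}", filename])
```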
We are a Windows shop, so we leverage PowerShell as the command and Write-Output "<custom text>" that gets put into the emails via <JobOutput>. We can usually get away with one generic command per group that cares about the file.
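A rough Python equivalent of that output step (whatever gets printed is what the scheduler captures as job output and substitutes into the email body; the group name and file list here are purely illustrative):

```python
def emit_alert_text(group: str, missing_files: list[str]) -> str:
    # Stand-in for the PowerShell Write-Output command: whatever this
    # prints is captured as job output and dropped into the email.
    text = f"ALERT [{group}]: expected file(s) not received: {', '.join(missing_files)}"
    print(text)
    return text
```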
It's efficient as a poor man's file-based alerting, but it does result in some job sprawl depending on the number of files you want to alert on.