So, we're upgrading an application environment from its simple little single Unix box to a three-node cluster (still Unix) whose nodes share the application's file system via a SAN.
At present, I have a variety of file events watching for periodic file drops, which trigger job insertions based on those files.
How best do I go about making sure those files are still picked up if one of the cluster nodes dies? File events and agent lists don't go together (I'm currently running 5.3.1, I should point out)... so, do I have to set up another two file events per file? (Yuck.) Or is there another, simpler answer?
Thanks,
Dave Martin