
Tidal SFTP job

5sprabhakaran
Level 1

We are using a Tidal job to FTP files every day across servers in different regions. I have an issue with an SFTP job: I can't find how to set the exit code when there is no file to pick up. The user wants a mail alert if the job "completed abnormally", but my job completes normally, reporting 0 files transferred. How do I make the job complete abnormally if it has not picked up any files from the source location? I tried setting "Scan output: Abnormal String(s)" to "cpabnrm". Is that correct, or what is the option to make the job complete abnormally if it picked up no files?
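One approach (a minimal sketch, not a Tidal-specific feature): wrap the transfer in a command job and fail explicitly when nothing landed, so the scheduler sees a nonzero exit code. `check_transfer` and the directory are placeholders, and the sftp step itself is elided.

```shell
#!/bin/sh
# Hypothetical wrapper logic: after the transfer step, fail the job when
# the landing directory is empty. Paths are placeholders.

check_transfer() {
    # count regular files in the landing directory
    count=$(find "$1" -maxdepth 1 -type f | wc -l)
    if [ "$count" -eq 0 ]; then
        echo "ABNORMAL: 0 files transferred"
        return 1    # nonzero exit -> job can be marked Completed Abnormally
    fi
    echo "OK: $count file(s) transferred"
}

# demo: an empty staging directory trips the abnormal path
dir=$(mktemp -d)
check_transfer "$dir" || echo "job would complete abnormally"
```

The point is simply that a nonzero exit code is something the scheduler can act on even when the transfer tool itself reports "0 files" as success.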

5 Replies

Maximilian Gallagher
Cisco Employee

In the job definition, on the Run tab, in the "Tracking" section, you can determine whether the job completes abnormally when a certain string appears in the output. So if you wanted it to complete abnormally when the output contains "File not found", you could check "Scan output: Abnormal String(s)" and type "File Not Found" in the textbox. The syntax for matching multiple different strings is: "example","of","multiple","matches"
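For the scan to have something to match, the job has to actually print the sentinel. A hedged sketch, assuming you control the job's script (`emit_status` and the glob are placeholders): print "File Not Found" yourself when nothing matched, so the tracking entry above trips.

```shell
#!/bin/sh
# Hypothetical sketch: emit the exact text configured under
# "Scan output: Abnormal String(s)" when the source glob matched nothing.

emit_status() {
    # "$1" is the first expansion of a glob; if the glob matched nothing,
    # the shell passes the pattern through literally and nothing exists
    if [ ! -e "$1" ]; then
        echo "File Not Found"    # exact text configured in the tracking tab
        return 1
    fi
    echo "Transferring: $*"
}

dir=$(mktemp -d)
emit_status "$dir"/*.csv || true   # empty dir -> prints "File Not Found"
```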

Been there, done that.

The problem I ran into is that since we've gone to 64-bit servers, 'sometimes' the output isn't captured by Tidal to store back in the Tidal database, so the job never sees these 'grep console outputs' to initiate the event it's been waiting for.

On critical jobs, I'll prefix the core job with a job that simply 'discovers' what's out there and what we've received already, then send the next job out to actually do what the user needs. At times I then send a third job to confirm the second and compare the first results to the third.

A lot of work?

Definitely, but my organization's strength is partnership with outside entities, which means the Tidal job has to hop through multiple IT checkpoints and layers of organizational 'turf' in the OSI model that my superiors have no domain over, and these are critical jobs we're dealing with for the org.

If all you want to invest in is inspecting the console output, then placing those strings in that area is the way to go.

Remember to quote (single or double) the phrases you need, use the comma to mean 'OR', and the plus '+' to require all phrases in your grep. I think the docs say something about this.
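If the docs match what's described here, a sketch of that match syntax might look like the following (unverified against the Tidal manual, so treat it as an assumption to check):

```
"File Not Found","Permission denied"     comma = trip on EITHER phrase
"sftp"+"0 files transferred"             plus  = require BOTH phrases
```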

Also, whitespace you're not seeing will play tricks on you with these triggers, so be prepared for that one. Some content also doesn't get interpreted completely at times (I think it has to do with encoded pages of text). IDK...

Take small wins as the way to go with this, instead of solving the whole deal in one fell swoop.

Also, if you're dealing with a proxy server, you can end up transferring only the name, with the content never getting moved (because of the proxy server). You might need to build something in for this situation too, because you'll end up with a filename but a zero-byte file transferred, which goes back to my original comment.
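That name-arrives-but-bytes-don't failure mode is easy to check for after the transfer. A minimal sketch (directory and function name are placeholders):

```shell
#!/bin/sh
# Hypothetical post-transfer sanity check for the proxy case described
# above: the filename lands but the content doesn't, leaving a 0-byte file.

find_zero_byte() {
    # list regular files of size 0 in the landing directory
    find "$1" -maxdepth 1 -type f -size 0
}

dir=$(mktemp -d)
touch "$dir/report.csv"            # simulate a name-only transfer
if [ -n "$(find_zero_byte "$dir")" ]; then
    echo "ABNORMAL: zero-byte file after transfer"
    # a real job would 'exit 1' here so the scheduler flags it
fi
```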

Good luck

If you're having problems with Windows 64-bit applications, there is probably a workaround available to you: put the 64-bit .exe files in the agent's bin folder, or place the .exe outside the 64-bit system folder (windows\system32). Windows prevents 32-bit applications (the agent) from running 64-bit applications in the windows\system32 folder. You may be running a 32-bit version of your application without realizing it, since the agent falls back to the 32-bit version in the syswow64 folder. This is a common situation with PowerShell jobs.
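Another hedged option for the same redirection problem: Windows exposes a `sysnative` alias that lets a 32-bit process (like the agent) reach the real 64-bit System32. A sketch of what the job's command line might look like (the script path is a placeholder):

```
REM From a 32-bit agent, %windir%\sysnative bypasses WOW64 file redirection;
REM job.ps1 is a hypothetical script path.
%windir%\sysnative\WindowsPowerShell\v1.0\powershell.exe -File C:\scripts\job.ps1
```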

We're off topic, but to address the last post: my job failures (not capturing console logs) are random in nature and I can't predictably replicate the event, so I can't lean towards dropping an executable in the syswow64 folder.

They work more than half the time, it's the other half I wish could be predictable.

But thanks for thinking of my situation with me-

I have had this issue as well. Here is how I solve it:

1.  create job A, which reaches out to the counterparty and attempts to FTP/SFTP the file into a staging location

2.  set the job to rerun every 5 minutes

3.  define an event on the job which emails me if the job errors out or completes abnormally

4.  create job B to move the file from staging location to a final location where my users go to consume the files

5.  make a file dependency on job B so that it does not fire until the file(s) desired have arrived.  

6.  define a completed normally event on Job B so that users are notified upon file receipt.  

7.  define completed abnormally/error events as per your standard for job B
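The job A / job B split above can be sketched in shell, with Tidal's file dependency simulated as an existence check. `STAGING` and `FINAL` are placeholders, and job A's actual SFTP fetch is elided:

```shell
#!/bin/sh
# Hypothetical sketch of the two-job pattern: job A lands files in staging,
# job B waits on a file dependency, then delivers to the final location.

STAGING=$(mktemp -d)
FINAL=$(mktemp -d)

job_b() {
    # job B only fires once something is in staging (the file dependency)
    set -- "$STAGING"/*
    [ -e "$1" ] || { echo "waiting for file"; return 1; }
    mv "$STAGING"/* "$FINAL"/
    echo "delivered: $(find "$FINAL" -maxdepth 1 -type f | wc -l) file(s)"
}

job_b || true                  # nothing staged yet -> still waiting
touch "$STAGING/data.txt"      # simulate job A landing a file
job_b                          # now the move runs; users can be notified
```

Splitting fetch and delivery like this is what makes the "completed normally" event on job B meaningful: it only ever fires after a real file arrived.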

Hope this helps.