
How to secure the password when calling SACmd from a tidal job?

I'm attempting to use the SACmd command line from a Tidal job.  I have the job configured and it is working correctly.

My question is: how do I secure the password?  Currently it is in the clear in the job parameters.  Is there a system variable I can set which then isn't displayed?


This has been answered by others in older posts (which is where I got some tips), and I believe the reference guide or the command-line PDF guides cover it. Essentially it involves using the -persist command option on the server where sacmd is installed (usually the master), which saves the credentials in the user's local home on that server, so you don't have to add the password to any Tidal job you create after that.

1. Prior to running sacmd on the server as a job in TES 6.1, verify that it can run from the command line outside of the TES GUI.

[serveradmin@masterserv bin]$ pwd


[serveradmin@masterserv bin]$ ./

TES SA Command Console (Version

JRE Version : 1.7.0_25

Style       : table

DSP URL     :

Username    : domain\interactiveuser

Password    :


2. So that you don't have to append the login credentials required by TES 6.1 to the command parameters in the job definition, run the command below.  It uses the -persist parameter, which stores the interactive user's credentials in the user's local home one time.  You will need to run it again for each additional user and whenever passwords change.

[serveradmin@masterserv bin]$ ./ -cmdspurl -user 'domain\serveradmin' -pass '********' -persist

TES SA Command Console (Version

JRE Version : 1.7.0_25

Style       : table

DSP URL     :

User        : domain\interactiveuser

Connection info saved successfully

[serveradmin@masterserv bin]$ ./ -help

NOTE: the user has to be an interactive user in TIDAL in order for this to work.  In this particular case the Tidal job will use masterserv as the agent and UNIX\serveradmin as the runtime user (since that is the user that owns the executable binaries), but it will actually use the domain\interactiveuser TIDAL interactive-user security to run sacmd.  If domain\interactiveuser doesn't have security-policy rights to perform an action in sacmd, the command will fail.

This will save your credentials in this location:

[serveradmin@masterserv tescmd]$ pwd


[serveradmin@masterserv tescmd]$ ls -lat

total 12

-rw-rw-r-- 1 serveradmin serveradmin   76 Aug 23 14:13 .connection

drwxrwxr-x 2 serveradmin serveradmin 4096 Aug 23 14:12 .

drwx------ 4 serveradmin serveradmin 4096 Aug 23 14:12 ..

[serveradmin@masterserv tescmd]$ vi .connection
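One extra hardening step: the listing above shows the saved .connection file as group-readable (664).  A minimal sketch of restricting it to the owning user, assuming the ~/tescmd location shown above (the exact path may vary by install):

```shell
# secure_connection_file: restrict the persisted sacmd credential file to
# the owning user only (0600). The tescmd path is taken from the listing
# above; verify it on your own install before relying on this.
secure_connection_file() {
    conn_file="$1"
    if [ -f "$conn_file" ]; then
        chmod 600 "$conn_file"
    fi
}

# Typical location on the server where `sacmd -persist` was run:
secure_connection_file "$HOME/tescmd/.connection"
```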



That's it.  When you run your sacmd job, you don't have to pass any credentials in the command parameters.






Thank you Carolanne!


Yes, I ran into this with my initial configuration.  I originally named the CMs 1 and 2 to make troubleshooting easier for users and me.

I had to redo it and name them the same, and did the configuration above on both CMs.  Now it's a pain when troubleshooting, since you have more work to do to figure out where a user was routed.

So instead of below:

[serveradmin@masterserv bin]$ ./ -cmdspurl -user 'domain\serveradmin' -pass ######

it became:

[serveradmin@masterserv bin]$ ./ -cmdspurl -user 'domain\serveradmin' -pass ######

I am using SSL offloading with the load balancer, so the non-LB route doesn't use SSL but the LB route does.






Thanks for the quick response, but I am not sure I follow.

1. Originally you had cm1, cm2

2. Renamed them both cm???

3. Issued sacmd with the DNS name for F5/api/cm

4. Only use SSL on one of the CMs via the F5 config?


Did you rename the CMs back to cm1, cm2?




Originally I had my client managers named TesDEVCM1 and TesDEVCM2.

That prevented me from using the load balancer for sacmd, so

I had to rename them to be the same; they are now both called:



The SSL comment was extraneous; I just wanted to explain why I was using http to access both CMs directly (not via the load balancer), while through the load balancer it is https.  'SSL offloading' is a feature of the LB so that your Tidal web servers/CMs can focus on doing Tidal work without the additional overhead of handling SSL.

You don't have to do it this way, but without SSL you are passing your AD login credentials across your intranet in clear text.  You can also set up SSL on the CMs instead of on the load-balancer appliance.




Thank you so much for your info.  We are new to the F5 and multiple CMs.  If you don't mind, I have another question.  We have our master in an FT config and two CMs pointing to the fault-tolerant masters cmtidalprod1 and prod2.  How do you have two CMs with the same name, or I guess, can you have two with the same name pointing to the same master?  And where do you specify the name?


Pardon the lack of knowledge in this area, your input is very much appreciated.




I happened to be researching another issue in the forum and just saw your question:

- Actually, you don't have to change the name of the CM if you want to use the default name Tidal gives during installation of the CM.  The point is to make sure that if you rename, you don't use different names.  The default CM name is, I believe, something like 'tes-60...' (I can't remember the rest since we renamed ours).  The *.dsp file in the config directory tells you the name of your CM; ours is called TesDEVCM.dsp.

So if yours are already named the same for both CMs, don't worry about it.  If not, then I believe all I did was change the name of the .dsp file (I can't remember exactly; you may want to log a case to verify), then clear the CM cache files on the server and restart.
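As a quick check, you can list the *.dsp names on each CM host to confirm both Client Managers carry the same name.  A minimal sketch; the config directory path is an assumption and varies by install:

```shell
# list_dsp_names: print the base names of any *.dsp files in a Client
# Manager config directory. Both CMs should report the same name before
# you point sacmd (or the load balancer) at them.
list_dsp_names() {
    for f in "$1"/*.dsp; do
        [ -e "$f" ] && basename "$f" .dsp
    done
    return 0
}

# Hypothetical config path -- adjust to where your CM is installed:
list_dsp_names "/opt/TIDAL/ClientManager/config"
```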


I can't remember the architecture in detail now, but I believe the master does not know about the CM or how many CMs you have.  The CM knows about the master database since its database syncs to it.  If you use Transporter, it will also need the DSP URL; same if you use the iPhone app.


We also use the FT master configuration.  I am using an agent list for my sacmd Tidal jobs.  My sacmd binaries are on both masters, so I added agents to the masters and added those agents to the agent list.  This way my sacmd jobs will always run, even when one agent is being patched (which we do once a month).  So I had to run sacmd -persist on both agents' servers to save the credentials.  I don't remember whether you have to have the sacmd binaries on the master server (I am thinking maybe not).
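Since -persist has to be run separately on each agent's server, a Tidal job wrapper could fail fast when the saved credentials are missing.  A minimal sketch, assuming the tescmd/.connection location shown earlier in the thread:

```shell
# check_persisted_creds: return non-zero if `sacmd -persist` has not been
# run for the runtime user on this server, i.e. there is no saved
# .connection file under the given home directory. The tescmd path is an
# assumption taken from the listing earlier in the thread.
check_persisted_creds() {
    if [ ! -f "$1/tescmd/.connection" ]; then
        echo "no persisted sacmd credentials under $1/tescmd -- run sacmd -persist first" >&2
        return 1
    fi
}

check_persisted_creds "$HOME" || echo "credentials not persisted on this host" >&2
```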

At one point I knew all this by heart, but I have been moved around to different projects and have not retained all the nitpicky details; logging a case with Cisco may help verify some of these.




Would you have any experience with this topic if you had two Client Managers behind an F5 load balancer?  Since the .connection file is based on the CM you logged into, and each time you log in you could be on cm1 or cm2, any ideas would help.  Thanks.



