Unable to back up ACS 1121 v5.x

Eric R. Jones
Level 4

Hello. First, let me say that I'm about to go on vacation, but I can still read emails and relay information back to those who are still here at work.

We have 4 CSACS 1121's at version 5.5.

Though the updates have fixed various bugs, the one thing that seems to have been broken since early 5.4 is the ability to back up the ACS.

When I ran the application upgrade function, it successfully made a local backup of the system; however, it won't make a backup now for love nor money.

We have tried sending it directly to our Solaris ZFS S7120 storage device and get the error below.

When I run it from the CLI, I get:

backup yacs001 repo Backup
% backup in progress: Starting Backup...10% completed
% Creating backup with timestamped filename: yacs001-131219-0816.tar.gpg
Please enter backup encryption password [8-32 chars]:
Please enter the password again:
% backup in progress: Backing up ADEOS configuration...55% completed
Calculating disk size for /opt/backup/backup-yacs001-1387408604
Total size of backup files are 94 M.
Max Size defined for backup files are 97591 M.
% backup in progress: Moving Backup file to the repository...75% completed
% Failure occurred during request

We have also tried sending it to a Solaris server and get the error below:

backup yacs001 repo yuds001

% backup in progress: Starting Backup...10% completed

% Creating backup with timestamped filename: yacs001-131219-0828.tar.gpg

Please enter backup encryption password [8-32 chars]:

Please enter the password again:

% backup in progress: Backing up ADEOS configuration...55% completed

Calculating disk size for /opt/backup/backup-yacs001-1387409300

Total size of backup files are 94 M.

Max Size defined for backup files are 97591 M.

% backup in progress: Moving Backup file to the repository...75% completed

% SSH connect error

I can simply ssh to a server from the ACS's CLI and get in with no problem.
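For anyone comparing notes, these are the quick checks I mean, run straight from the ADE-OS prompt (10.30.0.82 is the ZFS appliance from my repository config below, and the exact ssh argument order may differ slightly between ADE-OS releases):

ssh 10.30.0.82 root
nslookup 10.30.0.82
show repository Backup

The show repository command should list the files sitting in the remote repository when the ACS can actually reach it, so it's a quicker test than waiting for a full backup to fail.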

When I run it from within the GUI, of course, I see nothing, and checking the logs has produced nothing useful.

I have Googled around, and other people are having this issue as well.

I was going to open a TAC case on this but decided to start here and see if anyone has a fix.

ej

I did some more searching for ways to get troubleshooting/debug data and found this:

debug backup-restore all 7

backup yacs001 repo Backup

7 [18649]:[debug] backup-restore:backup: br_backup.c[233] [nela]: initiating backup yacs001 to repos Backup

6 [18649]:[info] backup-restore:backup-logs: br_backup.c[152] [nela]: backup in progress:Starting Backup...10% completed

% backup in progress: Starting Backup...10% completed

7 [18649]:[debug] backup-restore:backup: br_backup.c[271] [nela]: no staging url defined, using local space

7 [18649]:[debug] backup-restore:backup: br_backup.c[50] [nela]: flushing the staging area

7 [18649]:[debug] backup-restore:backup: br_backup.c[295] [nela]: creating /opt/backup/backup-yacs001-1387417532

7 [18649]:[debug] backup-restore:backup: br_backup.c[299] [nela]: creating /opt/backup/backup-yacs001-1387417532/backup/cars

% Creating backup with timestamped filename: yacs001-131219-1045.tar.gpg

7 [18649]:[debug] backup-restore:backup: br_backup.c[341] [nela]: creating /opt/backup/backup-yacs001-1387417532/backup/acs

Please enter backup encryption password [8-32 chars]:

Please enter the password again:

6 [18649]:[info] backup-restore:backup: br_backup.c[401] [nela]: adding ADEOS files to backup

6 [18649]:[info] backup-restore:backup-logs: br_backup.c[152] [nela]: backup in progress:Backing up ADEOS configuration...55% completed

% backup in progress: Backing up ADEOS configuration...55% completed

6 [18649]:[info] backup-restore:backup: br_backup.c[98] [nela]: Invoke /opt/CSCOacs/mgmt/cli/bin/acsDiskSizeCheckUtil.sh script for acs

Calculating disk size for /opt/backup/backup-yacs001-1387417532

Total size of backup files are 94 M.

Max Size defined for backup files are 97591 M.

6 [18649]:[info] backup-restore:backup-logs: br_backup.c[152] [nela]: backup in progress:Moving Backup file to the repository...75% completed

% backup in progress: Moving Backup file to the repository...75% completed

7 [18649]:[debug] backup-restore:backup: br_backup.c[50] [nela]: flushing the staging area

7 [18649]:[debug] backup-restore:history: br_history.c[252] [nela]: running date

7 [18649]:[debug] backup-restore:history: br_history.c[76] [nela]: obtained backup history lock

7 [18649]:[debug] backup-restore:history: br_history.c[160] [nela]: loaded history file /var/log/backup.log

7 [18649]:[debug] backup-restore:history: br_history.c[118] [nela]: stored backup history file

7 [18649]:[debug] backup-restore:history: br_history.c[90] [nela]: released backup history lock

7 [18649]:[debug] backup-restore:history: br_history.c[310] [nela]: added record to history

3 [18649]:[error] backup-restore:backup: br_backup.c[457] [nela]: Backup failed: copy yacs001-131219-1045 out to repository Backup failed

% Failure occurred during request

6 [18649]:[info] backup-restore:backup: br_cli.c[468] [nela]: error message: Failure occurred during request

Two items stand out for me (my highlighting and underlining don't carry over here): the "no staging url defined, using local space" line and the final "copy out to repository Backup failed" error.

The first one I find odd, since I have created 3 different types of repositories, which I would expect to be treated as URL staging areas:

repository Backup

  url sftp://10.30.0.82/export/SRFASABackups/YACS001

  user root password hash 7fb544431c36fc2a209d4a5dc9dc5b9c4f154209

repository SRFASABackups

  url nfs://10.30.0.86:/SRFASABackups/YACS001

  user ECmgr password hash bd676e188e0eb748efa2a923710092121aabf4b3

repository yuds001

  url sftp://10.30.0.86/export/home/ECmgr

  user ECmgr password hash bd676e188e0eb748efa2a923710092121aabf4b3
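
For completeness, each of these was entered from config mode in roughly this form (the password is typed as plain text; the running config then displays the hash shown above):

conf t
 repository Backup
  url sftp://10.30.0.82/export/SRFASABackups/YACS001
  user root password plain <password>
 exit
end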

The second is odd to me because, if the system failed to copy the data out to the repository, wouldn't something be left behind on the local server?

I tried again by creating a local backup repo, and here's the debug output:

yacs001/nela# backup yacs001 repo LocalBackup

7 [20511]:[debug] backup-restore:backup: br_backup.c[233] [nela]: initiating backup yacs001 to repos LocalBackup

6 [20511]:[info] backup-restore:backup-logs: br_backup.c[152] [nela]: backup in progress:Starting Backup...10% completed

% backup in progress: Starting Backup...10% completed

7 [20511]:[debug] backup-restore:backup: br_backup.c[271] [nela]: no staging url defined, using local space

7 [20511]:[debug] backup-restore:backup: br_backup.c[50] [nela]: flushing the staging area

7 [20511]:[debug] backup-restore:backup: br_backup.c[295] [nela]: creating /opt/backup/backup-yacs001-1387418264

7 [20511]:[debug] backup-restore:backup: br_backup.c[299] [nela]: creating /opt/backup/backup-yacs001-1387418264/backup/cars

% Creating backup with timestamped filename: yacs001-131219-1057.tar.gpg

7 [20511]:[debug] backup-restore:backup: br_backup.c[341] [nela]: creating /opt/backup/backup-yacs001-1387418264/backup/acs

Please enter backup encryption password [8-32 chars]:

Please enter the password again:

Password does not match

7 [20511]:[debug] backup-restore:backup: br_backup.c[50] [nela]: flushing the staging area

3 [20511]:[error] backup-restore:backup: br_backup.c[384] [nela]: backup script failed

7 [20511]:[debug] backup-restore:history: br_history.c[252] [nela]: running date

7 [20511]:[debug] backup-restore:history: br_history.c[76] [nela]: obtained backup history lock

7 [20511]:[debug] backup-restore:history: br_history.c[160] [nela]: loaded history file /var/log/backup.log

7 [20511]:[debug] backup-restore:history: br_history.c[118] [nela]: stored backup history file

7 [20511]:[debug] backup-restore:history: br_history.c[90] [nela]: released backup history lock

7 [20511]:[debug] backup-restore:history: br_history.c[310] [nela]: added record to history

% Application backup error

6 [20511]:[info] backup-restore:backup: br_cli.c[468] [nela]: error message: Application backup error

5 Replies

Amjad Abdullah
VIP Alumni

Hello.

What is your repository type (FTP, SFTP, NFS, etc.)?

Have you tried copying to a repository on a different machine with a different OS (Windows or Mac), just to isolate the problem? See the sketch below.
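
For example, a throwaway FTP repository pointing at a Windows box could be defined and tested roughly like this (the address, path and credentials are placeholders):

conf t
 repository WinTest
  url ftp://192.168.1.50/acs-backups
  user backupuser password plain <password>
 exit
end
show repository WinTest
backup test-backup repository WinTest

If show repository already fails, the problem is with reaching the repository rather than with the backup job itself.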

Regards,

Amjad

Rating useful replies is more useful than saying "Thank you"

I apologize for the extremely late reply. I hope you're still available.

We have had major disruptions in our work space which may be settling down now.

What we have are 4 (finally online) ACS 1121's running 5.5.

My plan is to run regular backups to a Solaris ZFS S7120 storage appliance.

I can use SFTP or NFS.

I prefer to use SFTP.

Right now I'm still getting the SSH connect error over SFTP, and with NFS it doesn't see the mount point. I forget the exact error.


At one point I had it reporting that the backups were completed at 100%, but when I checked, there wasn't any file on the target or on the source.

To get to that point I had to fix the RSA keys on the appliance and on the ACS.

Unfortunately it's back to the SSH connection error.

ej

I realise this is an old post, but it was the only hit I found when Googling, so I thought I'd throw in my 2 cents' worth for the next unlucky soul that Google sends this way.

I had similar problems that turned out to be issues with the SSH host keys. I had added the host key by IP address instead of hostname without thinking too much about it.

I stumbled upon the solution by accident. If you log in to the ACS and execute the command

show repository <<your repo name>>

it should give you a file listing for the remote repository if all is well with the keys. It will complain if you have host key issues and list a number of possible causes. Having set the host key by IP address the day before, I finally made the connection that it had to be done by hostname. Most likely it has to match whatever format you used in the repository URL.

acs01/admin# show crypto host_keys
2048 37:15:75:80:79:3e:00:b6:c0:9d:df:19:93:b7:e9:f8 192.168.1.100 (RSA)
2048 37:15:75:80:79:3e:00:b6:c0:9d:df:19:93:b7:e9:f8 linuxbox (RSA)


acs01/admin# crypto host_key delete host linuxbox
host key fingerprint for linuxbox removed

acs01/admin# show repository  linuxbox
% Error : Operation failed due to one of the following reasons
1. host key is not configured
2. host key is removed because of re-image
3. host key is removed from some other repository having same ip/hostname
% Please add the host key using the crypto host_key exec command
% SSH connect error

acs01/admin# crypto host_key add host linuxbox
host key fingerprint added
# Host linuxbox found: line 2 type RSA
2048 37:15:75:80:79:3e:00:b6:c0:9d:df:19:93:b7:e9:f8 linuxbox (RSA)

acs01/admin# show crypto host_keys
2048 37:15:75:80:79:3e:00:b6:c0:9d:df:19:93:b7:e9:f8 192.168.1.100 (RSA)
2048 37:15:75:80:79:3e:00:b6:c0:9d:df:19:93:b7:e9:f8 linuxbox (RSA)

acs01/admin# show repository  linuxbox
acsviewdbfull_acs01_20150723_103750.tar.gpg                              
acsviewdbfull_acs01_20150726_023500.tar.gpg                              
acsviewdbfull_acs01_20150731_020000.tar.gpg                              
.
.
.

Hi Robert,

Interesting post. I was able to reproduce this issue on ACS 5.8 installed on a VM (ESXi 5.5 hypervisor). It works with a hostname, but not with an IP address.

Cisco TAC should look into this.

You are correct. I just tested it myself, and an SFTP repository with an IP address in the URL doesn't work.

I didn't try it with public key auth, though, so maybe that's a workaround if you need to use an IP address in the URL. The easiest mitigation is to make the host resolvable in DNS and use that name consistently in the repository URL and the host key, as sketched below.
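
Something along these lines, in other words; linuxbox stands in for your backup host, the address and credentials are placeholders, and the ip name-server line assumes your DNS can actually resolve the name:

conf t
 ip name-server 192.168.1.1
 repository linuxbox
  url sftp://linuxbox/backups
  user backupuser password plain <password>
 exit
end
crypto host_key add host linuxbox
show repository linuxbox

With the repository URL, the host key and DNS all agreeing on the same name, show repository should return a file listing instead of the SSH connect error.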
