Unable to mount NFS repositories for Cisco Prime Infrastructure remote backups

Eric R. Jones
Level 4

We have had our Prime server backing up to an NFS share for a few years.

We are deploying a new server, a PI 3.6 virtual machine.

It's configured and servicing some test switches so it's operational.

When I went to configure the repositories from the CLI, all went well until I tried to use them.

The backup runs but returns the error %Error mounting NFS location.

However, I can cd into the directory /mnt/repos, view what's in it, create new files, and remove files. I'm doing this as root, gaining access through the shell first and then running sudo su.
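For what it's worth, the manual check I'm running looks roughly like this (the test file name is just an example):

sudo su                          # gain root via the shell
cd /mnt/repos                    # the NFS mount point
ls -l                            # list the current backup files
touch testfile && rm testfile    # confirm I can create and delete files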

I tried changing the permissions on /mnt from 755 to 775 and 777 but no change.

As an alternative, I can run the backup command to create a local file and then use crontab to fire off a job that copies the file from the local repo /localdisk/defaultRepo to the NFS mount /mnt/repo and then deletes the local file.
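A rough sketch of that cron job, assuming the backup lands in /localdisk/defaultRepo and the share is mounted at /mnt/repo (the schedule and file pattern are illustrative):

# root crontab entry: copy any local backup files to the NFS mount, then remove the local copies
30 2 * * * cp /localdisk/defaultRepo/*.tar.gpg /mnt/repo/ && rm -f /localdisk/defaultRepo/*.tar.gpg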

I had to do this earlier when the same issue arose. That was solved by having the system guys fix the Windows side and allow the server through. That wasn't changed so I'm not sure what's going on now.

Anyone have this issue before?

I didn't find anything in a search of the community or in Google.

 

ej

Accepted Solution

The fix for this in my case was soooo simple.

Unmount the /mnt/repos directory I created, edit the fstab file, and comment out the line that mounts /mnt/repos at boot.
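In other words, something along these lines from the appliance shell (the fstab entry shown is only an example of the line that gets commented out):

umount /mnt/repos
vi /etc/fstab
# comment out the boot-time NFS mount, e.g.:
# yvwdat1.srf.local:/<location name>  /mnt/repos  nfs  defaults  0 0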

After that I ran the backup command and:

Backup Started at : 12/04/19 10:52:05
Stage 1 of 7: Database backup ...
Database size: 33G
-- completed at 12/04/19 10:55:19
Stage 2 of 7: Database copy ...
-- completed at 12/04/19 10:55:19
Stage 3 of 7: Backing up support files ...
-- completed at 12/04/19 10:55:22
Stage 4 of 7: Compressing Backup ...
-- completed at 12/04/19 10:55:53
Stage 5 of 7: Building backup file ...
-- completed at 12/04/19 11:00:47
Stage 6 of 7: Encrypting backup file ...
-- completed at 12/04/19 11:03:03
Stage 7 of 7: Transferring backup file ...
-- completed at 12/04/19 11:05:42
% Backup file created is: yvaprm1test4-191204-1052__VER3.6.0.601.7_BKSZ30G_CPU4_MEM3G_RAM11G_SWAP15G_SYS_CK887884907.tar.gpg
Total Backup duration is: 0h:13m:42s

 

Success!!!!!

I did reconfigure the URL for the NFS mount to nfs://yvwdat1.srf.local:/<location name>/<location name>.

I then ran a test with the original repository format nfs://yvwdat1.srf.local/<location name>/<location name> which failed.

I checked and the /mnt/repos directory wasn't mounted.

I then reconfigured the repository pointer back to the one using the ":" in both locations, and it worked.
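For clarity, the repository definition that works has the ":" right after the hostname (the repository name here is a placeholder; the syntax matches the example further down the thread):

repository remoteNFS
  url nfs://yvwdat1.srf.local:/<location name>/<location name>

Dropping that ":" after the hostname is what made the mount fail for me.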

So for me this is working.

ej

 


6 Replies

marce1000
VIP

 

I presume that for some reason Prime gets confused when opening or accessing/dropping files on an NFS share. The reason is unclear; perhaps some command in the access protocols doesn't take into account that the files are on an NFS share (so I don't know whether this is supported either). Perhaps it's best to access the location via an FTP- or SFTP-based repository definition.
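For example, an SFTP-based repository definition would look something like this (server name, path, and credentials are placeholders):

repository sftpRepo
  url sftp://backupserver.example.com/backups
  user backupuser password plain <password>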

 M.



-- "Good body every evening" - this sentence was once spotted on a logo at the entrance of a Weight Watchers Club!

renjithg
Cisco Employee

Verify that the following steps were followed when setting up NFS staging and remote storage for Prime backup.

Configurations:-
Steps to be done on the Active Server/Source Server:-
[1] Configure the staging URL to point to the server where you want to store the backup.
Ex:- If 10.106.33.78 is the active server and 10.106.33.105 is the inactive instance where you want to put the backup, you can create the staging URL along with the path where the staging should happen. Staging is the path where the data is transferred temporarily so the backup file can be tarred later.

backup-staging-url nfs://10.106.33.105://localdisk/defaultRepo/staging

[2] Now configure the repository pointing to the other Prime instance, along with the target location.
Note:- The staging and storage folders can be on the same NFS server, or on separate NFS servers. If you plan to stage and store on separate NFS servers, you will need IP addresses for both servers (see the sketch after the example below).

Ex:-
repository nfs
  url nfs://10.106.33.105://localdisk/defaultRepo/storage/
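If you stage and store on separate NFS servers, the two URLs simply point at different hosts, for example (the IP placeholders are mine):

backup-staging-url nfs://<staging-server-IP>://localdisk/defaultRepo/staging
repository nfs
  url nfs://<storage-server-IP>://localdisk/defaultRepo/storage/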

[3] Log in to the shell of the Prime server and start NFS. Do not miss this command:
ade # service nfs start


Steps to be done on the In-Active Server/Target Server where you put the backup:-


[1] Log in to the shell of the target Prime server and start nfs and rpcbind:
ade # service nfs start
ade # service rpcbind start
[2] Create the target locations (both storage and staging):
mkdir -p /localdisk/staging
mkdir -p /localdisk/storage
[3] Give Complete permissions to those directories:-
chmod 777 /localdisk/storage
chmod 777 /localdisk/staging
[4] Create the exports accordingly. Steps to do as below:-
sudo su -
vi /etc/exports
Press Esc and then i. Now you will be able to enter text into the exports file. You can give the permissions as below:- (10.106.33.78 is the Active Primary instance)
/localdisk/defaultRepo/staging 10.106.33.78(rw,sync)
/localdisk/defaultRepo/storage 10.106.33.78(rw,sync)

[5] Export the locations using the command:-
exportfs -arv
Sample output:-
ade# exportfs -arv
exporting 10.106.33.78:/localdisk/defaultRepo/storage
exporting 10.106.33.78:/localdisk/defaultRepo/staging
The output must look like this: both staging and storage must be exported successfully so that the Prime server can mount them.

[+] Modify the services as below:-
service iptables stop
chkconfig iptables off
service rpcbind restart

Verify:-
[1] On the Primary server, after successful configuration of the Target server, enter the shell and type the command:- (the Target server used here is 10.106.33.105)

ade # rpcinfo -p 10.106.33.105
   program vers proto   port  service
    100000    4   tcp    111  portmapper
    100000    3   tcp    111  portmapper
    100000    2   tcp    111  portmapper
    100000    4   udp    111  portmapper
    100000    3   udp    111  portmapper
    100000    2   udp    111  portmapper

Check whether you are able to see the shared staging and storage folders on the remote NFS server from the Prime Infrastructure server.
If the output of this command does not show the NFS service and its associated ports on the NFS server, you need to restart the NFS service on the Prime Infrastructure server.
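Restarting it should just be the same service command used above:

ade # service nfs restart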
[2] From the admin mode of Prime, Please verify using the command:-
PI33/admin# show repository nfs
a1
test-180428-0313.tar.gz

Once everything has been verified, point the backup job to the NFS-configured repository in the UI under Administration -> Jobs -> Server Backup -> Edit. You can then trigger the backup to the target location successfully.
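If you prefer the CLI, the equivalent manual trigger from the admin prompt would look roughly like this (the backup name is a placeholder and "nfs" is the repository defined earlier):

PI33/admin# backup MyNfsBackup repository nfs application NCS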

 

• Ideally, the Prime server file system partitions should look like this:

 

-bash-4.1# df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/mapper/smosvg-rootvol
                      3.9G  327M  3.4G   9% /
tmpfs                 5.9G  1.8G  4.1G  31% /dev/shm
/dev/mapper/smosvg-altrootvol
                      124M  5.6M  113M   5% /altroot
/dev/sda1             485M   41M  419M   9% /boot
/dev/mapper/smosvg-home
                      124M  6.0M  112M   6% /home
/dev/mapper/smosvg-localdiskvol
                       44G   18G   24G  44% /localdisk
/dev/mapper/smosvg-optvol
                      203G   57G  136G  30% /opt
/dev/mapper/smosvg-recvol
                      124M  5.6M  113M   5% /recovery
/dev/sda2              97M  5.6M   87M   7% /storedconfig
/dev/mapper/smosvg-storeddatavol
                      9.7G  151M  9.0G   2% /storeddata
/dev/mapper/smosvg-tmpvol
                      2.0G   42M  1.8G   3% /tmp
/dev/mapper/smosvg-usrvol
                      6.8G  1.2G  5.3G  18% /usr
/dev/mapper/smosvg-varvol
                      3.9G  192M  3.5G   6% /var
10.106.33.105://localdisk/defaultRepo/staging
                      203G   28G  166G  15% /opt/backup

The last entry (10.106.33.105://localdisk/defaultRepo/staging mounted on /opt/backup) is the NFS mount; it does not appear when exportfs is not working on the remote NFS server.

 

In PI 3.5 we had an issue with NFS remote backup of the Prime application, but in 3.6 we do not.

So far, the only difference between what you have laid out here and what I have is that I no longer had the backup-staging-url configured. I had removed it, then put it back, but no matter what, the failure output states it can't mount the staging location. If I cd into the mount point /mnt/repos/ I can access the NFS location, manipulate the files, and exit. I can create a backup locally on the Prime server and then manually cp or mv the file to the NFS share.

The rpcinfo -p <IPAddress> command returns the expected result. The NFS client appears to be working properly based on the output of the commands and my ability to access the locations. For some reason it just won't mount. I thought it might have been a permissions issue, but it doesn't appear to be.
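Another check worth running from the Prime shell (assuming showmount is available on the appliance) is to list the exports directly:

showmount -e <NFS server IP>

It should return the exported staging and storage paths.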

 

ej

Well, no joy on this method. I even waited until we migrated the new PI 3.6 server from its temporary IP to the permanent IP, replacing the old 3.1 server. It fails to mount the staging URL and fails to mount the repository NFS URL, but if run locally with the script it will run the backup, copy it to the NFS mount, and remove the old file. I checked the permissions and ownership of the files and directories, but nothing appears to be wrong.

 

ej

