We have had our Prime server backing up to an NFS share for a few years.
We are deploying a new server, a PI 3.6 virtual machine.
It's configured and servicing some test switches, so it's operational.
When I went to configure the repositories from the CLI all went well until I tried to use them.
The backup runs but returns the error %Error mounting NFS location.
However, I can cd into the directory (/mnt/repos), view what is in it, create new files, and remove files. I'm doing this as root, gaining access through the shell first and then running sudo su.
I tried changing the permissions on /mnt from 755 to 775 and 777 but no change.
As a workaround, I can run the backup command to create a local file, then use crontab to fire off a job that copies the file from the local repo (/localdisk/defaultRepo) to the NFS mount (/mnt/repo) and then deletes the local file.
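The cron-driven workaround described above can be sketched as a small shell script. The `*.tar.gpg` filename pattern is an assumption (Prime application backups typically use that extension), and the paths are the ones mentioned in the post:

```shell
#!/bin/sh
# Sketch of the workaround: copy finished backups from the local
# repository to the NFS mount, then remove the local copy.
# The *.tar.gpg pattern is an assumption about Prime backup filenames.

sync_backup() {
    src="$1"   # local repository, e.g. /localdisk/defaultRepo
    dst="$2"   # NFS mount, e.g. /mnt/repos
    for f in "$src"/*.tar.gpg; do
        [ -e "$f" ] || continue          # no backups present; glob did not match
        cp "$f" "$dst"/ && rm -f "$f"    # delete local copy only if the copy succeeded
    done
}

sync_backup /localdisk/defaultRepo /mnt/repos
```

A crontab entry such as `0 2 * * 0 /root/sync_backup.sh` (weekly at 02:00; the script path is hypothetical) would fire it off.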
I had to do this earlier when the same issue arose. That was solved by having the system guys fix the Windows side and allow the server through. That wasn't changed so I'm not sure what's going on now.
Anyone have this issue before?
I didn't find anything on a search in the community nor in google.
- I presume that for some reason Prime gets confused when opening or accessing/dropping files on an NFS share. The reason is unclear; perhaps the access protocol uses some command that doesn't take into account that the files are on an NFS share (so I don't know whether this is even supported). It may be best to access the location via an FTP- or SFTP-based repository definition.
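For reference, an SFTP-based repository definition in the Prime admin CLI looks roughly like this; the hostname, repository name, path, and user below are placeholders, not from the original post:

```
PI36/admin# configure terminal
PI36/admin(config)# repository sftpRepo
PI36/admin(config-Repository)# url sftp://sftp.example.com//backups
PI36/admin(config-Repository)# user backupuser password plain ****
PI36/admin(config-Repository)# exit
```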
Verify that the following steps were followed when setting up NFS staging and remote storage for the Prime backup.
Steps to be done on the active server (source server):
- Configure the staging URL to point to the server where you want the backup to be taken.
Ex: If 10.106.33.78 is the active server and 10.106.33.105 is the inactive instance where you want to put the backup, you can create the staging URL along with the path where staging should take place. Staging is the path where the data is transferred temporarily before being tarred into the final file.
- Now configure the repository to point to the other Prime instance, along with the target location.
Note: The staging and storage folders can be on the same NFS server or on separate NFS servers. If you plan to stage and store on separate NFS servers, you will need the IP addresses of both servers.
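Assuming those example addresses, the staging URL and the NFS repository would be configured from the admin CLI roughly as follows; the repository name and paths are illustrative:

```
PI36/admin# configure terminal
PI36/admin(config)# backup-staging-url nfs://10.106.33.105:/localdisk/staging
PI36/admin(config)# repository remoteRepo
PI36/admin(config-Repository)# url nfs://10.106.33.105:/localdisk/storage
PI36/admin(config-Repository)# exit
```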
- Log in to the shell of the Prime server and start nfs:
ade # service nfs start
Do not miss the command:
Steps to be done on the inactive server (target server) where you put the backup:
- Log in to the shell of the target Prime server and start nfs and rpcbind:
ade # service nfs start
ade # service rpcbind start
- Create the target locations (both storage and staging):
mkdir -p /localdisk/staging
mkdir -p /localdisk/storage
- Give complete permissions to those directories:
chmod 777 /localdisk/storage
chmod 777 /localdisk/staging
- Create the exports accordingly, as below:
sudo su -
Open the exports file (/etc/exports) in vi and press i to enter insert mode. Add the entries granting access as below (10.106.33.78 is the active primary instance):
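A minimal /etc/exports entry for this setup might look like the following; the export options shown are a typical example, not taken from the original post:

```
/localdisk/staging 10.106.33.78(rw,sync,no_root_squash)
/localdisk/storage 10.106.33.78(rw,sync,no_root_squash)
```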
- Export the locations using the command:
ade# exportfs -arv
exporting 10.106.33.78:/localdisk/defaultRepo/storage   <-- The output should look like this; staging and storage must be exported successfully for the Prime server to mount them.
[+] Modify the services as below:
service iptables stop
chkconfig iptables off
service rpcbind restart
- On the primary server, after successful configuration of the target server, enter the shell and type the following command (the target server used here is 10.106.33.105):
ade # rpcinfo -p 10.106.33.105
program  vers  proto  port  service
 100000     4    tcp   111  portmapper
 100000     3    tcp   111  portmapper
 100000     2    tcp   111  portmapper
 100000     4    udp   111  portmapper
 100000     3    udp   111  portmapper
 100000     2    udp   111  portmapper
Check whether you are able to see the shared stage and storage folders on the remote NFS server from the Prime Infrastructure server.
If the output of this command does not show the NFS service and its associated ports on the NFS server, you need to restart the NFS service on the Prime Infrastructure server.
- From the admin mode of Prime, verify using the command:
PI33/admin# show repository nfs
Once everything has been verified, point the backup job to the NFS-configured repository in the UI under Administration -> Jobs -> Server backup -> Edit. You can then trigger the backup to the target location successfully.
-bash-4.1# df -h
Filesystem Size Used Avail Use% Mounted on
3.9G 327M 3.4G 9% /
tmpfs 5.9G 1.8G 4.1G 31% /dev/shm
124M 5.6M 113M 5% /altroot
/dev/sda1 485M 41M 419M 9% /boot
124M 6.0M 112M 6% /home
44G 18G 24G 44% /localdisk
203G 57G 136G 30% /opt
124M 5.6M 113M 5% /recovery
/dev/sda2 97M 5.6M 87M 7% /storedconfig
9.7G 151M 9.0G 2% /storeddata
2.0G 42M 1.8G 3% /tmp
6.8G 1.2G 5.3G 18% /usr
3.9G 192M 3.5G 6% /var
10.106.33.105://localdisk/defaultRepo/staging   <-- This mount point appears once exportfs is working on the remote NFS server; it was missing earlier because the export was not working.
203G 28G 166G 15% /opt/backup
In PI 3.5 we had an issue with NFS remote backup of the Prime application, but in 3.6 we do not.
So far the only difference between what you have laid out here and what I have is that I no longer had the backup-staging-url configured. I had removed it, then put it back, but no matter what, the failure output states it can't mount the staging location. If I cd into the mount point /mnt/repos/ I can access the NFS location, manipulate the files, and exit. I can create a backup locally on the Prime server and then manually cp or mv the file to the NFS share.
The rpcinfo -p <IPAddress> returns the expected result, and the NFS client appears to be working properly based on the command output and my ability to access the locations. For some reason Prime won't mount it. I thought it might have been a permissions issue, but it doesn't appear to be.
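One way to separate a Prime problem from a plain NFS problem is to attempt the same mount manually from the Prime shell; if this succeeds where Prime's automatic mount fails, the NFS side is healthy. The server address, export path, and the NFSv3 option below are assumptions for illustration:

```
mkdir -p /tmp/nfs-test
mount -t nfs -o vers=3 10.106.33.105:/localdisk/staging /tmp/nfs-test
touch /tmp/nfs-test/write-test && rm /tmp/nfs-test/write-test
umount /tmp/nfs-test
```

If the mount succeeds but the write test fails, it points back to permissions or export options rather than connectivity.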
Well, no joy with this method. I even waited until we migrated the new PI 3.6 server from its temporary IP to the permanent IP, replacing the old 3.1 server. It fails to mount the staging URL and fails to mount the repository NFS URL, but if run locally with the script it will run the backup, copy it to the NFS mount, and remove the old file. I checked the permissions and ownership of the files and directories, but nothing appears to be wrong.