
I have multiple Ubuntu servers. I recently installed a few 11.04 servers (and one desktop), and I've just found that upon rebooting, the NFS mounts will not mount.



I've tried upgrading nfs-common to the latest version (I'm only one small revision behind), but that just slightly changes the errors. All of the servers having the issue are VMware clones of a server template I made a while back, so I thought the problem might be in the template and therefore in all of its clones; however, I tried the same mount on the 11.04 desktop and hit the same issue. About half the time I'm able to press "S" to skip, but the other half of the time the server freezes (and I restore from a recent snapshot). What's odd is that if I do get into the system, a "mount -a" works fine and mounts everything. This makes me think NFS isn't waiting for the network to be present before trying to mount. Something else that points the same way: I get an "unable to resolve host" error for the NFS server, even though that host is in /etc/hosts.
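(For what it's worth, once the system is up, name resolution through /etc/hosts can be double-checked with getent, which goes through the same NSS lookup path the resolver uses; the expected output below assumes the hosts entry shown further down:)

getent hosts NFSSERVER-priv
# expected: 10.1.1.43       NFSSERVER-priv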



Here is my /var/log/boot.log:



fsck from util-linux-ng 2.17.2
fsck from util-linux-ng 2.17.2
/dev/sda1 was not cleanly unmounted, check forced.
/dev/mapper/php53x-root: clean, 75641/1032192 files, 492673/4126720 blocks (check in 5 mounts)
init: portmap-wait (statd) main process (373) killed by TERM signal
init: statd main process (402) terminated with status 1
init: statd main process ended, respawning
init: statd-mounting main process (355) killed by TERM signal
mount.nfs: Failed to resolve server NFSSERVER-priv: Name or service not known
init: statd-mounting main process (416) killed by TERM signal
mount.nfs: Failed to resolve server NFSSERVER-priv: Name or service not known
init: statd main process (435) terminated with status 1
init: statd main process ended, respawning
init: statd main process (459) terminated with status 1
init: statd main process ended, respawning
mountall: mount /var/www [410] terminated with status 32
mountall: mount /var/users [436] terminated with status 32
init: statd-mounting main process (448) killed by TERM signal
init: statd main process (468) terminated with status 1
init: statd main process ended, respawning
init: statd main process (498) terminated with status 1
init: statd main process ended, respawning
/dev/sda1: 226/124496 files (1.3% non-contiguous), 39133/248832 blocks
mountall: fsck /boot [268] terminated with status 1
mount.nfs: rpc.statd is not running but is required for remote locking.
mount.nfs: Either use '-o nolock' to keep locks local, or start statd.
mount.nfs: an incorrect mount option was specified
mountall: mount /var/users [583] terminated with status 32
mount.nfs: rpc.statd is not running but is required for remote locking.
mount.nfs: Either use '-o nolock' to keep locks local, or start statd.
mount.nfs: an incorrect mount option was specified
mountall: mount /var/www [575] terminated with status 32
mount.nfs: rpc.statd is not running but is required for remote locking.
mount.nfs: Either use '-o nolock' to keep locks local, or start statd.
mount.nfs: an incorrect mount option was specified
mountall: mount /var/www [638] terminated with status 32
mount.nfs: rpc.statd is not running but is required for remote locking.
mount.nfs: Either use '-o nolock' to keep locks local, or start statd.
mount.nfs: an incorrect mount option was specified
mountall: mount /var/users [645] terminated with status 32
mount.nfs: rpc.statd is not running but is required for remote locking.
mount.nfs: Either use '-o nolock' to keep locks local, or start statd.
mount.nfs: an incorrect mount option was specified
mountall: mount /var/www [724] terminated with status 32
mount.nfs: rpc.statd is not running but is required for remote locking.
mount.nfs: Either use '-o nolock' to keep locks local, or start statd.
mount.nfs: an incorrect mount option was specified
mountall: mount /var/users [729] terminated with status 32
Skipping /var/www at user request
* Starting AppArmor profiles [ OK ]
* Starting Name Service Cache Daemon nscd [ OK ]
FATAL: Module vmhgfs not found.
FATAL: Module vmsync not found.
FATAL: Module vmblock not found.
* Loading open-vm-tools modules [ OK ]
* Starting open-vm daemon vmtoolsd [ OK ]


Sorry for the long post; I just wanted to convey as much information as possible. Does anyone have any suggestions? I've been googling all day and have tried things with _netdev as well as changing the configuration for statd, but nothing has worked. This is affecting 6 servers.



/etc/fstab (problem lines only; removing these lets everything else mount normally):



NFSSERVER-priv:/vol/vol1_isp/eshowcase/sites      /var/www       nfs     ro,defaults        0       0
NFSSERVER-priv:/vol/vol1_isp/vusers /var/users nfs defaults 0 0


/etc/hosts (relevant entry):



10.1.1.43 NFSSERVER-priv
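For reference, the mount.nfs errors in the boot log suggest nolock as a stopgap, and since the IP is already in /etc/hosts the hostname could in principle be bypassed. Roughly what those variants would look like in fstab; the option names are real, but whether either one helps with the boot-ordering problem here is unverified:

# keep the hostname but skip remote locking, as the error message suggests
NFSSERVER-priv:/vol/vol1_isp/eshowcase/sites  /var/www    nfs  ro,nolock        0  0

# mount by IP (taken from /etc/hosts) to sidestep name resolution at boot
10.1.1.43:/vol/vol1_isp/vusers                /var/users  nfs  defaults,nolock  0  0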

Answers

Here's what I did as a workaround, in case anyone else runs into this problem and comes looking for a solution:



Create a script (mountall.sh) in /etc/init.d/:



#!/bin/bash
# Mount the NFS shares that fail to come up automatically at boot.
# /var/www is mounted read-only (-r), matching the ro option in fstab.

mount -r NFSSERVER-priv:/vol/vol1_isp/eshowcase/sites /var/www
mount NFSSERVER-priv:/vol/vol1_isp/vusers /var/users


Make the system aware of the new script:



update-rc.d mountall.sh defaults


The option "defaults" puts a link to start mountall.sh in runlevels 2, 3, 4 and 5 (and a link to stop it in runlevels 0, 1 and 6).
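To confirm what update-rc.d created, the links can be listed; on a default install they usually get sequence number 20, but treat the exact names below as illustrative:

ls -l /etc/rc2.d/ | grep mountall
# typically something like: S20mountall.sh -> ../init.d/mountall.sh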



Chmod the file to be executable:



chmod +x /etc/init.d/mountall.sh
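Before rebooting, it's worth running the script by hand to confirm both shares mount (the grep pattern simply matches the two mount points used here):

sudo /etc/init.d/mountall.sh
mount | grep -E '/var/(www|users)'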


Now when you reboot (init 6) you should have your mount points. It's also a good idea to leave a comment in your fstab so people know where everything is actually being mounted from, since that's the first place they'll look.
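For example, a couple of comment lines like these (wording is just a suggestion):

# /var/www and /var/users are NFS mounts from NFSSERVER-priv.
# They are mounted by /etc/init.d/mountall.sh rather than here, because
# mounting them from fstab fails at boot (see mount.nfs errors in /var/log/boot.log).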

