• NFS resources stopped mounting during system start

    After a recent update (systemd, I suspect) my NFS resources stopped mounting automatically at boot. I didn't change the systemd configuration and didn't modify my /etc/fstab file:

    I have 11 resources configured to mount (all on the same server). The entries in /etc/fstab look like this:

    servername:/path/to/resource /path/in/local/disk nfs rsize=8192,wsize=8192,timeo=14,intr,nosuid
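    A full fstab entry has six fields; when the trailing dump and pass fields are omitted, as above, they default to 0. A quick sanity check is to count the fields of each NFS line (a sketch; the entry below is the question's placeholder line with the defaults written out explicitly):

    ```shell
    # Count whitespace-separated fields in an fstab-style NFS entry (sketch).
    # The entry mirrors the placeholder paths above, with "0 0" appended.
    entry='servername:/path/to/resource /path/in/local/disk nfs rsize=8192,wsize=8192,timeo=14,intr,nosuid 0 0'
    printf '%s\n' "$entry" | awk '{print NF}'   # prints 6 for the full six-field form
    ```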

    I also tried the option “vers=4”, but without success.

    When I mount them manually, it works fine: I just run mount -a -t nfs and all resources get mounted.
    I can't find any error messages in the logs; there are only messages about registering the NFS service.
    The “.mount” files seem to be generated properly and are placed in /run/systemd/generator. Each one looks similar to this:

    # Automatically generated by systemd-fstab-generator
    Documentation=man:fstab(5) man:systemd-fstab-generator(8)
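    For comparison, a full generated unit typically carries [Unit] and [Mount] sections along the lines of the sketch below. The values mirror the placeholder paths above; the exact set of lines depends on the systemd version, so treat this as an illustration, not verbatim generator output:

    ```ini
    # /run/systemd/generator/path-in-local-disk.mount (sketch, not verbatim)
    [Unit]
    Documentation=man:fstab(5) man:systemd-fstab-generator(8)
    SourcePath=/etc/fstab

    [Mount]
    Where=/path/in/local/disk
    What=servername:/path/to/resource
    Type=nfs
    Options=rsize=8192,wsize=8192,timeo=14,intr,nosuid
    ```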

    I also tried the automount approach suggested here: https://wiki.archlinux.org/index.php/NFS, using the following options: “noauto,x-systemd.automount,x-systemd.device-timeout=10,timeo=14,x-systemd.idle-timeout=1min 0 0”
    As a result, not all resources got mounted: only 8 out of 11. Either this method is unreliable, or something is misconfigured on my side.
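    Spelled out as a complete /etc/fstab line (same placeholder paths as above), that attempt looks like:

    ```
    servername:/path/to/resource /path/in/local/disk nfs noauto,x-systemd.automount,x-systemd.device-timeout=10,timeo=14,x-systemd.idle-timeout=1min 0 0
    ```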

    It seems the command from the generated “.mount” file is executed. I checked the status of this unit with the following command:
    `sudo systemctl status path-in-local-disk.mount`

    And I received the following output:

    ● path-in-local-disk.mount - /path/in/local/disk
       Loaded: loaded (/etc/fstab; generated; vendor preset: disabled)
       Active: failed (Result: exit-code) since Tue 2017-08-22 01:00:56 CEST; 2min 41s ago
        Where: /path/in/local/disk
         What: servername:/path/to/resource
         Docs: man:fstab(5)
      Process: 509 ExecMount=/usr/bin/mount servername:/path/to/resource /path/in/local/disk -t nfs -o rsize=8192,wsize=8192,timeo=14,intr,nosuid (code=exited, status=32)
    Aug 22 01:00:56 SkyLake-i7 systemd[1]: Mounting /path/in/local/disk...
    Aug 22 01:00:56 SkyLake-i7 systemd[1]: path-in-local-disk.mount: Mount process exited, code=exited status=32
    Aug 22 01:00:56 SkyLake-i7 systemd[1]: Failed to mount /path/in/local/disk.
    Aug 22 01:00:56 SkyLake-i7 systemd[1]: path-in-local-disk.mount: Unit entered failed state.

    In the manual (man 8 mount) I found that exit status 32 means “mount failure”.
    Interestingly, mounting manually succeeds (exit code zero):

    $ mount servername:/path/to/resource /path/in/local/disk -t nfs -o rsize=8192,wsize=8192,timeo=14,intr,nosuid
    $ echo $?
    0
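    The same exit-status check can be scripted. In this sketch, `sh -c 'exit 32'` stands in for the failing mount call, since the real status-32 failure only shows up in the boot-time environment with the NFS server involved:

    ```shell
    # mount(8) exit status: 0 = success, 32 = mount failure.
    # A stand-in command simulates the failing boot-time mount.
    sh -c 'exit 32'
    status=$?
    if [ "$status" -eq 32 ]; then
        echo "mount failure (status 32)"
    else
        echo "mounted OK (status $status)"
    fi
    ```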

    System parameters:

    • systemd version: 234.11-8
    • kernel version: 4.12.8-2-ARCH
    • nfs-utils version: 2.1.1-4

    The system boots from an SSD drive.
    I connect to the network by wire.

  • I've been having the exact same issue since the last update. Nothing I did in my fstab worked, nor did trying version 4 like you did.

  • I found a workaround (or solution).

    I tried to mount one of my resources using the systemd automount service, so I used the following options in /etc/fstab:
    noauto,x-systemd.automount,x-systemd.device-timeout=10,timeo=14,x-systemd.idle-timeout=1min 0 0
    All the others look like:
    servername:/path/to/resourceN /path/in/local/diskN nfs rsize=8192,wsize=8192,timeo=14,intr,nosuid

    Unfortunately without success, so I changed the options of the resource that wouldn't mount (even via automount) to “timeo=14,rw,intr”; that entry now looks like:
    servername:/path/to/resource1 /path/in/local/disk1 nfs timeo=14,rw,intr

    And guess what? Magically, all resources started mounting. I'm not sure why everything worked with the same options before and now I needed to change one of them. Additionally, I can't remember whether I had changed anything in /etc/fstab for the resource that wouldn't mount.

    Anyway, I encourage you to experiment with different options for one or more entries in /etc/fstab.

  • Reinstall ZFS

    sudo pacman -S zfs

    Reboot, and it's back.

    There is something fishy with Antergos ZFS these days…
