In this post, I'll explain how to securely configure NFS on Debian, so that you can mount a directory from one server on another machine. A good use case for this is if you have a storage VPS with a large amount of disk space and want to use that space from other servers.

Security

NFS is unencrypted by default. It can be encrypted if you use Kerberos, but I wouldn't recommend going through the pain of configuring Kerberos unless you're setting up a corporate network with dozens of users.

Because of this, I would recommend never exposing an NFS server directly to the internet. I'd also advise against exposing it on "internal" networks which are not isolated per customer, such as what HostHatch provides. On isolated private networks (like what BuyVM provides), it's fine to use NFS unencrypted.

To secure NFS connections over the internet or another untrusted network, I'd recommend using WireGuard. There are various guides on how to configure WireGuard, so I won't go into too much detail here. Note that unlike classic VPN solutions such as OpenVPN, WireGuard does not have the concept of a "client" and a "server". Each node is a "peer", and the overall topology is up to you. For example, you can have a "mesh" VPN network where every machine can directly access every other machine, without a central server.

On Debian 11 (Bullseye, currently testing) you can simply run apt install wireguard to get WireGuard. On Debian 10 (Buster), you'll first have to enable buster-backports, then run apt -t buster-backports install wireguard.
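If you haven't enabled buster-backports yet, it only takes a sources entry (this assumes the standard Debian mirror; adjust if you use a different one):

# Add the buster-backports repository and refresh the package index
echo "deb http://deb.debian.org/debian buster-backports main" > /etc/apt/sources.list.d/buster-backports.list
apt update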

Generate a private and public key on each system:

wg genkey | tee privatekey | wg pubkey > publickey
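Since tee creates the privatekey file with default permissions, it's worth tightening them afterwards so that only root can read it (or set a restrictive umask before generating the keys):

chmod 600 privatekey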

Then configure /etc/wireguard/wg0.conf on each system. The [Interface] section should have the private key for that particular system. The NFS server should have a [Peer] section for each system that is allowed to access it, and all the other systems should have a [Peer] section for the NFS server. On a client, it should look something like this:

[Interface]
Address = 10.123.0.2
PrivateKey = 12345678912345678912345678912345678912345678
ListenPort = 51820

[Peer]
PublicKey = 987654321987654321987654321987654321987654321
AllowedIPs = 10.123.0.1/32
Endpoint = 198.51.100.1:51820

where 10.123.0.1 and 10.123.0.2 can be any IPs of your choosing, as long as they're in the same subnet and within one of the private IP ranges reserved for local networks (the 10.0.0.0/8 range is usually a good choice).
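For reference, the matching config on the NFS server itself would look something like this. This is a sketch using the same example addresses; the angle-bracket placeholders stand in for real base64 keys, and Endpoint is omitted since the clients initiate the connection:

[Interface]
Address = 10.123.0.1
PrivateKey = <the NFS server's private key>
ListenPort = 51820

# One [Peer] section per system that's allowed to access the NFS server
[Peer]
PublicKey = <the client's public key>
AllowedIPs = 10.123.0.2/32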

Enable and start the WireGuard service on each machine:

systemctl enable wg-quick@wg0
systemctl start wg-quick@wg0

Run wg to check that the interface is up, and make sure you can ping the NFS server from the other servers.
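For example, using the addresses from the config above:

wg
ping 10.123.0.1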

NFS Server

On the NFS server, install the nfs-kernel-server package:

apt install nfs-kernel-server

A best practice these days is to enable only NFSv4, unless you really need NFSv3 for legacy clients. To do so, set the following variables in /etc/default/nfs-common:

NEED_STATD="no"
NEED_IDMAPD="yes"

And set the following in /etc/default/nfs-kernel-server. Note that RPCNFSDOPTS is not present by default and needs to be added:

RPCNFSDOPTS="-N 2 -N 3 -H 10.123.0.1"
RPCMOUNTDOPTS="--manage-gids -N 2 -N 3"

10.123.0.1 should be the IP address the NFS server will listen on (the WireGuard IP).

Additionally, rpcbind is not needed by NFSv4 but will be started as a prerequisite by nfs-server.service. This can be prevented by masking rpcbind.service and rpcbind.socket:

systemctl mask rpcbind.service
systemctl mask rpcbind.socket

Next, configure your NFS exports in /etc/exports. For example, this will export the /data/hello-world directory and only allow 10.123.0.2 to access it:

/data/hello-world 10.123.0.2(rw,sync,no_subtree_check)

Refer to the exports(5) man page for more details.
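If you edit /etc/exports later while the server is running, you can apply the changes without restarting it:

exportfs -ra

Running exportfs -v lists the active exports along with their options, which is handy for double-checking.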

Finally, start the NFS server:

systemctl start nfs-server
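To verify that nfsd is only listening on the WireGuard IP, check the listening sockets (2049 is the standard NFS port; the exact output format varies):

ss -tln | grep 2049

You should see 10.123.0.1:2049 rather than 0.0.0.0:2049.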

NFS Client

On the NFS client, you need to install the nfs-common package:

apt install nfs-common

Now, you can use the mount command to mount the directory over NFS:

mkdir -p /mnt/data/
mount -t nfs4 -o vers=4.2,async 10.123.0.1:/data/hello-world /mnt/data/

Try writing some files to /mnt/data, and it should work!
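For example (keep in mind that with the default root_squash export option, root on the client is mapped to an unprivileged user, so the exported directory on the server needs to be writable by that mapped user):

echo hello > /mnt/data/test.txt
cat /mnt/data/test.txt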

To automatically mount the directory on boot, modify /etc/fstab:

10.123.0.1:/data/hello-world /mnt/data nfs4 auto,vers=4.2
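One caveat: since the server is only reachable over WireGuard, the mount can fail at boot if it's attempted before wg0 is up. A workaround I'd suggest (not strictly required, but standard systemd behaviour) is to mark it as a network filesystem and let systemd mount it on first access:

10.123.0.1:/data/hello-world /mnt/data nfs4 vers=4.2,_netdev,x-systemd.automount 0 0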

Optional: Caching

You can optionally cache data from the NFS server on the local disk using FS-Cache, a transparent read-through cache built into the Linux kernel (cachefilesd is the userspace daemon that manages the on-disk cache). The first time files are read via NFS, they will be cached locally. On subsequent reads, if a file has not been modified since it was cached, it will be read from the local cache rather than loaded over the network. This can provide a significant performance benefit if the NFS server has slower disks and/or is physically distant from the clients.

To enable caching, first install cachefilesd:

apt install cachefilesd

Turn it on by editing /etc/default/cachefilesd, following the instructions in the file:

# You must uncomment the run=yes line below for cachefilesd to start.
# Before doing so, please read /usr/share/doc/cachefilesd/howto.txt.gz as
# extended user attributes need to be enabled on the cache filesystem.
RUN=yes

Modify your NFS mount in /etc/fstab to add the fsc (file system cache) attribute. For example:

10.123.0.1:/data/hello-world /mnt/data nfs4 auto,vers=4.2,fsc

Finally, start the service and remount your directory:

systemctl start cachefilesd
mount -o remount /mnt/data

To check that it's working, read some files from the mount, and you should see /var/cache/fscache/ growing in size:

du -sh /var/cache/fscache/
76K     /var/cache/fscache/

By default, the cache will keep filling up until only 7% of disk space remains free, at which point cachefilesd starts culling old cache entries to reclaim space. If free space drops below 3%, caching is turned off entirely. You can change these thresholds by modifying /etc/cachefilesd.conf.
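For reference, these thresholds correspond to the following settings in /etc/cachefilesd.conf (shown here with their default values; culling starts below bcull, stops again above brun, and caching halts below bstop):

brun  10%
bcull 7%
bstop 3%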


Comments

  1. Ben said:

    Hi there,
    I've run into an issue:

    After installing cachefilesd and adding "fsc" to /etc/fstab, running "mount -o remount /mnt/data" gives me:

    "mount.nfs4: an incorrect mount option was specified"

    My /etc/fstab config:
    10.123.0.1:/data/kaiserschmarrn /mnt/data nfs4 auto,vers=4.2,fsc

    The mount works perfectly without the "fsc" option.

    Could you help me with this issue? I'm running a fresh Ubuntu 20.04 LTS installation on the client and Debian 10 on the host, and followed your tutorial exactly.

    Best wishes,
    Ben