How to Install and Configure an NFS Server on Ubuntu 20.04
NFS or Network File System is a distributed file system protocol that allows you to share directories over a network. With NFS, you can mount remote directories on your system and work with the files on the remote machine as if they were local files.
By default, the NFS protocol is not encrypted and does not provide user authentication. Access to the server is restricted by the client’s IP addresses or hostnames.
This article explains how to set up an NFSv4 Server on Ubuntu 20.04. We’ll also show you how to mount an NFS file system on the client machine.
Prerequisites
We’ll use two machines, one running Ubuntu 20.04, which will act as an NFS server, and another one running any other Linux distribution, on which we will mount the share. The server and the clients should be able to communicate with each other over a private network. You can use public IP addresses and configure the server firewall to allow traffic on port 2049 only from trusted sources.
The machines in this example have the following IPs:
NFS Server IP: 192.168.33.10
NFS Clients IPs: From the 192.168.33.0/24 range
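Before moving on, it is a good idea to verify that the client machines can reach the server over the private network. A simple check, using the example server IP from above:
ping -c 3 192.168.33.10
If the packets come back, the machines can talk to each other and you can continue with the setup.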
Set Up the NFS Server
The first step is to set up the NFS server. We’ll install the necessary packages, create and export the NFS directories, and configure the firewall.
Installing the NFS server
The NFS server package provides user-space support needed to run the NFS kernel server. To install the package, run:
sudo apt update
sudo apt install nfs-kernel-server
Once the installation is completed, the NFS services will start automatically.
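If you want to confirm that the server started, you can check the service state with systemctl (on Ubuntu the same unit is also reachable under the nfs-kernel-server alias):
sudo systemctl status nfs-server
The service should be reported as active.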
On Ubuntu 20.04, NFS version 2 is disabled; versions 3 and 4 are enabled. You can verify that by running the following cat command:
sudo cat /proc/fs/nfsd/versions
-2 +3 +4 +4.1 +4.2
NFSv2 is pretty old now, and there is no reason to enable it.
NFS server configuration is defined in the /etc/default/nfs-kernel-server and /etc/default/nfs-common files. The default settings are sufficient for most situations.
Creating the file systems
The NFSv4 server uses a global root directory, and the exported directories are relative to this directory. You can link the share mount point to the directories you want to export using bind mounts.
In this example, we’ll set the /srv/nfs4 directory as the NFS root. To better explain how the NFS mounts can be configured, we’re going to share two directories (/var/www and /opt/backups) with different configuration settings. The /var/www directory is owned by the user www-data, and /opt/backups is owned by root.
First create the root directory and the share mount points:
sudo mkdir -p /srv/nfs4/backups
sudo mkdir -p /srv/nfs4/www
Bind mount the directories to the share mount points:
sudo mount --bind /opt/backups /srv/nfs4/backups
sudo mount --bind /var/www /srv/nfs4/www
To make the bind mounts permanent across reboots, open the /etc/fstab file:
sudo nano /etc/fstab
and add the following lines:
/opt/backups /srv/nfs4/backups none bind 0 0
/var/www /srv/nfs4/www none bind 0 0
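To make sure the new fstab entries are correct without rebooting, you can ask mount to process the file and then inspect one of the bind mounts with findmnt (both tools ship with Ubuntu):
sudo mount -a
findmnt /srv/nfs4/www
The first command mounts everything listed in /etc/fstab and reports any syntax errors, and the second prints the source and options of the bind mount.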
Exporting the file systems
The next step is to add the file systems that will be exported, and the clients allowed to access those shares, to the /etc/exports file.
Each line for an exported file system has the following form:
export host(options)
Where export is the exported directory, host is a hostname or an IP address/range that can access the export, and options are the host options.
Open the /etc/exports file and add the following lines:
sudo nano /etc/exports
/srv/nfs4 192.168.33.0/24(rw,sync,no_subtree_check,crossmnt,fsid=0)
/srv/nfs4/backups 192.168.33.0/24(ro,sync,no_subtree_check) 192.168.33.3(rw,sync,no_subtree_check)
/srv/nfs4/www 192.168.33.20(rw,sync,no_subtree_check)
The first line contains the fsid=0 option, which defines the NFS root directory (/srv/nfs4). Access to this NFS volume is allowed only to clients from the 192.168.33.0/24 subnet. The crossmnt option is required to share directories that are sub-directories of an exported directory.
The second line shows how to specify multiple export rules for one file system. Read access is allowed to the whole 192.168.33.0/24 range, and both read and write access only to the 192.168.33.3 IP address. The sync option tells NFS to write changes to the disk before replying.
The last line is self-explanatory. For more information about all the available options, type man exports in your terminal.
Save the file and export the shares:
sudo exportfs -ar
You need to run the command above each time you modify the /etc/exports file. If there are any errors or warnings, they will be shown on the terminal.
To view the current active exports and their state, use:
sudo exportfs -v
The output includes all shares with their options. As you can see, there are also options that we haven’t defined in the /etc/exports file. Those are default options, and if you want to change them, you’ll need to set them explicitly.
/srv/nfs4/backups 192.168.33.3(rw,wdelay,root_squash,no_subtree_check,sec=sys,rw,secure,root_squash,no_all_squash)
/srv/nfs4/www 192.168.33.20(rw,wdelay,root_squash,no_subtree_check,sec=sys,rw,secure,root_squash,no_all_squash)
/srv/nfs4 192.168.33.0/24(rw,wdelay,crossmnt,root_squash,no_subtree_check,fsid=0,sec=sys,rw,secure,root_squash,no_all_squash)
/srv/nfs4/backups 192.168.33.0/24(ro,wdelay,root_squash,no_subtree_check,sec=sys,ro,secure,root_squash,no_all_squash)
On Ubuntu, root_squash is enabled by default. This is one of the most important options concerning NFS security. It prevents root users connected from the clients from having root privileges on the mounted shares by mapping the root UID and GID to the nobody/nogroup UID/GID.
In order for the users on the client machines to have access, NFS expects the client’s user and group IDs to match those on the server. Another option is to use the NFSv4 idmapping feature, which translates user and group IDs to names and vice versa.
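If you choose to rely on idmapping instead of matching IDs, the server and the clients must agree on the NFSv4 domain, which is configured in /etc/idmapd.conf. A minimal sketch, assuming example.com as the domain name (replace it with your own, and keep it identical on all machines):
[General]
Domain = example.com
Restart the NFS services after editing the file so the new domain is picked up.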
That’s it. At this point, you have set up an NFS server on your Ubuntu server. You can now move to the next step and configure the clients and connect to the NFS server.
Firewall configuration
If your NFS server is protected by a firewall, you’ll need to enable traffic on the NFS port:
sudo ufw allow from 192.168.33.0/24 to any port nfs
Verify the change:
sudo ufw status
The output should show that the traffic on port 2049 is allowed:
To Action From
-- ------ ----
2049 ALLOW 192.168.33.0/24
22/tcp ALLOW Anywhere
22/tcp (v6) ALLOW Anywhere (v6)
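If you prefer not to open the port to the whole subnet, ufw also accepts single addresses. For example, to allow only the backup client used in the exports above, you could run:
sudo ufw allow from 192.168.33.3 to any port nfs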
Set Up the NFS Clients
Now that the NFS server is set up and shares are exported, the next step is to configure the clients and mount the remote file systems.
We’ll focus on Linux systems, but you can also mount the NFS share on macOS and Windows machines.
Installing the NFS client
On the client machines, we need to install only the tools required to mount a remote NFS file system.
Install NFS client on Debian and Ubuntu
The name of the package that includes programs for mounting NFS file systems on Debian-based distributions is nfs-common. To install it, run:
sudo apt update
sudo apt install nfs-common
Install NFS client on CentOS and Fedora
On Red Hat and its derivatives, install the nfs-utils package:
sudo yum install nfs-utils
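On current Fedora releases, dnf replaces yum, so the equivalent command is:
sudo dnf install nfs-utils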
Mounting file systems
We’ll work on the client machine with IP 192.168.33.20, which has read and write access to the /srv/nfs4/www file system and read-only access to the /srv/nfs4/backups file system.
Create two new directories for the mount points:
sudo mkdir -p /backups
sudo mkdir -p /srv/www
You can create the directories at any location you want.
Mount the exported file systems with the mount command:
sudo mount -t nfs -o vers=4 192.168.33.10:/backups /backups
sudo mount -t nfs -o vers=4 192.168.33.10:/www /srv/www
Where 192.168.33.10 is the IP address of the NFS server. You can also use the hostname instead of the IP address, but it needs to be resolvable by the client machine. This is usually done by mapping the hostname to the IP in the /etc/hosts file.
When mounting an NFSv4 file system, omit the NFS root directory: use /backups instead of /srv/nfs4/backups.
Verify that the remote file systems are mounted successfully using either the mount or the df command:
df -h
The command will print all mounted file systems. The last two lines are the mounted shares:
Filesystem Size Used Avail Use% Mounted on
udev 951M 0 951M 0% /dev
tmpfs 199M 676K 199M 1% /run
/dev/sda3 124G 2.8G 115G 3% /
tmpfs 994M 0 994M 0% /dev/shm
tmpfs 5.0M 0 5.0M 0% /run/lock
tmpfs 994M 0 994M 0% /sys/fs/cgroup
/dev/sda1 456M 197M 226M 47% /boot
tmpfs 199M 0 199M 0% /run/user/1000
192.168.33.10:/backups 124G 2.8G 115G 3% /backups
192.168.33.10:/www 124G 2.8G 115G 3% /srv/www
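If you only want to see the NFS mounts, together with the options they were mounted with, you can filter the output of the mount command by file system type:
mount -t nfs4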
To make the mounts permanent across reboots, open the /etc/fstab file:
sudo nano /etc/fstab
and add the following lines:
192.168.33.10:/backups /backups nfs defaults,timeo=900,retrans=5,_netdev 0 0
192.168.33.10:/www /srv/www nfs defaults,timeo=900,retrans=5,_netdev 0 0
For information about the available options when mounting an NFS file system, type man nfs in your terminal.
Another option to mount the remote file systems is to use either the autofs tool or to create a systemd unit.
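For example, with systemd you can turn an fstab entry into an on-demand mount by adding the x-systemd.automount option, so the share is mounted the first time it is accessed rather than at boot. A sketch of such a line, using the same www share as above:
192.168.33.10:/www /srv/www nfs defaults,_netdev,noauto,x-systemd.automount 0 0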
Testing NFS Access
Let’s test the access to the shares by creating a new file on each of them.
First, try to create a test file in the /backups directory using the touch command:
sudo touch /backups/test.txt
The /backups file system is exported as read-only, and as expected, you will see a Permission denied error message:
touch: cannot touch ‘/backups/test.txt’: Permission denied
Next, try to create a test file in the /srv/www directory as root using the sudo command:
sudo touch /srv/www/test.txt
Again, you will see a Permission denied message:
touch: cannot touch ‘/srv/www/test.txt’: Permission denied
If you recall, the /var/www directory is owned by the www-data user, and this share has the root_squash option set, which maps the root user to the nobody user and the nogroup group, which do not have write permissions on the remote share.
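You can quickly check whether the IDs line up by running the id command on both the server and the client and comparing the output:
id www-data
The UID and GID printed on the two machines should be identical.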
Assuming that you have a www-data user on the client machine with the same UID and GID as on the remote server (which should be the case if, for example, you installed nginx on both machines), you can try to create a file as the www-data user:
sudo -u www-data touch /srv/www/test.txt
The command will show no output, which means the file was successfully created.
To verify it, list the files in the /srv/www directory:
ls -la /srv/www
The output should show the newly created file:
drwxr-xr-x 3 www-data www-data 4096 Apr 10 22:18 .
drwxr-xr-x 3 root root 4096 Apr 10 22:29 ..
-rw-r--r-- 1 www-data www-data 0 Apr 10 21:58 index.html
-rw-r--r-- 1 www-data www-data 0 Apr 10 22:18 test.txt
Unmounting NFS File System
If the remote NFS share is no longer needed, you can unmount it like any other mounted file system using the umount command.
For example, to unmount the /backups share, you would run:
sudo umount /backups
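If umount complains that the target is busy, some process is still using files on the share. You can close those processes, or fall back to a lazy unmount, which detaches the file system immediately and finishes the cleanup once it is no longer in use:
sudo umount -l /backups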
If the mount point is defined in the /etc/fstab file, make sure you remove the line or comment it out by adding # at the beginning of the line.
Conclusion
We have shown you how to set up an NFS server and how to mount the remote file systems on the client machines. If you’re implementing NFS in production and sharing sensitive data, it is a good idea to enable Kerberos authentication.
As an alternative to NFS, you can use SSHFS to mount remote directories over an SSH connection. SSHFS is encrypted by default and much easier to configure and use.
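For comparison, a basic SSHFS mount is usually a single command on the client, assuming the sshfs package is installed and you have SSH access to the server (the user and paths below are only an illustration):
sshfs user@192.168.33.10:/var/www /srv/www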
Feel free to leave a comment if you have any questions.