The Network File System (NFS) is an industry-standard means of sharing entire
filesystems among machines within a computer network.
As with the other aspects of networking, the
machines providing the service (in this case the filesystem) are the servers and the machines
utilizing the service are the clients. Files residing physically on the server appear as if they are
local to the client.
This enables file sharing without the hassle of copying the files and worrying
about which copy is the most current.
One difference between NFS and a "conventional" filesystem is that it is possible to
allow access to just a portion of a filesystem,
rather than the entire thing.
The term exporting is used to describe how NFS
makes local directories available to
remote systems. These directories are then said to be exported. Therefore, an exported
directory is a directory that has been made available for remote access. Sometimes the term
importing is used to refer to the process of remotely mounting filesystems, although
mounting is the more common term.
There are a couple of ways you can mount
a remote filesystem.
The first is automatically mounting it when the system boots up. This is done by adding an entry
to /etc/fstab. You could also add a line to some rc script that runs a mount command.
If the remote mount
is a one-time deal, the system administrator
can also mount it by hand. Alternatively, the administrator could create an entry in /etc/fstab that
does not mount the filesystem at boot
time, but instead allows it to be mounted later. In either event,
the system administrator would use the mount command. If necessary, the system administrator can
also allow users to mount remote filesystems.
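As a sketch, the jmohr example used later in this section could be handled with /etc/fstab entries like these (the second form, with noauto, defines the mount without activating it at boot):

```
# device          mountpoint   type  options     dump  pass
jmohr:/usr/man    /usr/man     nfs   defaults    0     0

# defined, but not mounted automatically at boot time:
jmohr:/usr/man    /usr/man     nfs   noauto,ro   0     0
```

With the noauto entry in place, the administrator can later mount it by hand with just "mount /usr/man".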
A machine can also be configured to mount remote filesystems
on an "as-needed" basis, rather than whenever the system boots. This is done through the mechanism of
the automount program. We'll get into a lot of details about how automount works later on.
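On many Linux systems the automounter is configured through a pair of map files. A minimal sketch, assuming autofs-style maps and the jmohr export used elsewhere in this section (the file names and mount point are illustrative):

```
# /etc/auto.master: mount point, map file, options
/misc    /etc/auto.misc    --timeout=60

# /etc/auto.misc: key, mount options, server:path
man      -ro,soft          jmohr:/usr/man
```

Simply accessing /misc/man then triggers the mount automatically, and the filesystem is unmounted again after 60 idle seconds.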
The syntax for using the mount
command to mount a remote filesystem is basically the same as for local filesystems. The difference
is that you specify the remote host along with the exported path. For
example, if I want to mount the man-pages from jmohr, I could do it like this:
mount -t nfs [-o options] jmohr:/usr/man /usr/man
Here I told the mount
command that I was mounting a filesystem
of type NFS
and that the filesystem was on the machine jmohr under the name /usr/man. I then told it to mount it
onto the local /usr/man directory. There are a couple of things to note here. First, I don't have
to mount the filesystem in the same place from which it is exported. I could have just as easily
mounted it on /usr/doc or /usr/local/man. If I want, I can include other options, as with "normal"
filesystems, such as mounting it read-only.
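For example, mounting the same export read-only onto a different directory might look like this (ro and soft are standard NFS mount options; soft makes operations time out rather than hang if the server goes down):

```
mount -t nfs -o ro,soft jmohr:/usr/man /usr/local/man
```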
If you are a server, the primary configuration file is /etc/exports. This is a simple ASCII
file, and additions or changes can be made with any text
editor. It is a list of the directories that the server is making available for mounting, along
with who can mount them and what permissions
they have. In addition, the server needs a way to find the client's address,
so mounting will fail if the name cannot be resolved either by DNS
or /etc/hosts. Likewise, the client depends on name resolution to access the server.
The /etc/exports file has one line for each directory you want to export.
The left side is the path of the directory you want to export and the right side is options
you want to apply. For example, you can limit access to the directory to just one machine or make
the directory read only. On junior, the exports might look like this:
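A minimal sketch of that file, matching the description that follows (old-style /etc/exports syntax; the client name on the last line is a stand-in, since only the first two lines are described exactly):

```
/pub
/                  jmohr(rw)
/usr/jmohr_root    siemau(rw)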
The first line says that I am exporting the /pub directory to the entire world. Since there are
no options, this means that the filesystem is also writable. I wouldn't
normally do this if I were connected to the Internet, even if there weren't anything sensitive here.
As a matter of practice, I want to know exactly what access I am giving to the world.
The next line says that I am exporting the entire root filesystem
to the machine jmohr. Since this is a development environment,
I have different versions and distributions of Linux on different machines. I often need to have
access to the different files to compare and contrast them. Here, the filesystem is also writable
as I explicitly said rw (for read-write).
The last line takes a little explaining. When I mount
the root filesystem
from jmohr, I mount it onto /usr/jmohr_root, which is the name of the directory that I am
exporting here. This demonstrates the fact that you can export a filesystem to
one machine and then have it re-exported.
Keep in mind, however, that you cannot increase the permissions during the re-export. That is, if
the filesystem were originally made read-only, it could not be made writable
when re-exported. However, if it were writable, it could be re-exported as read-only.
Image - An example NFS mount
A solution that many systems provide is amd, which is an automatic mounting facility for NFS
filesystems. Once configured, any command or program that accesses a file or directory on the remote
machine within the exported directory forces the mounting to occur. The exported directory remains
mounted until it is no longer needed.
If you can access a filesystem
under Linux, you can access it under NFS.
This is because access to the file is a multi-step process. When you first access a
file (say, opening a text file to edit it), the local system first determines
that this is an NFS-mounted filesystem. NFS on the local system then goes to NFS on the remote system
to get the file. On the remote system, NFS tries to read the file that is physically on the disk. It
is at this point that it needs to go through the filesystem drivers. Therefore, if the filesystem is
supported on the remote system, NFS should have no problem accessing it. Once a filesystem has been
exported, the client sees the filesystem as an NFS filesystem, and therefore
what type it really is becomes irrelevant.
There are a couple of limitations with NFS.
First, although you might be able to see the device nodes
on a remote machine, you cannot access the remote devices. Think back to the discussion on the
kernel. The device node is a special file that is opened to gain access to the
physical device through a device driver. It has a major and a
minor number that point to, and pass flags to, the device driver. If you open up a
device node on a remote system, the major and minor numbers for that device node point to drivers in
the local kernel.
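You can see this for yourself by looking at an ordinary device node; the major/minor pair is stored in the file, but it only means something to the kernel of the machine where the node is opened:

```shell
# The "c" type and the major/minor pair identify a driver in
# whatever kernel this node is opened under.
ls -l /dev/null
stat -c 'major=%t minor=%T' /dev/null   # 1 and 3 on Linux
```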
Remotely mounted filesystems present a unique set of problems when dealing with user access
rights. Because mismatches can have adverse effects on your system, it is necessary to have both user and
group IDs consistent across the entire network. If you don't, access to files and
directories can be limited, or you may end up giving someone access to a file that they shouldn't have.
Although you could create each user on every system, or copy the passwd files, the most effective
method is to use NIS.
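A quick way to check for a mismatch is to compare a user's numeric ID on both machines (the user name here is illustrative):

```
id -u jimmo             # on the client
ssh jmohr id -u jimmo   # on the server
```

If the two numbers differ, files created over NFS will appear to belong to the wrong user on one side or the other.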