NFS Introduction

From about 1997 to 2004(?) I used the Network File System heavily on my home network, very much in the arrangement it was intended for: a file server with three or four attached client machines that remotely mounted the shares it provided. It worked well, although it wasn't particularly fast (blame the 10BaseT wiring, although even 100BaseT is slow). But it's been a decade since I worked with NFS, and even longer since I set up a server, so I'm having to relearn the entire process. I'm working with http://linuxconfig.org/how-to-configure-nfs-on-linux . And, if you're silly enough to try this with TinyCore (as I was), you'll be wanting http://wiki.tinycorelinux.net/wiki:fileserver . The Arch documentation is almost always worth reading: https://wiki.archlinux.org/index.php/NFS . My aim this time is to export a filesystem from a local virtual machine to be mounted by other local virtual machines, so this is meant to be entirely internal. Why bother? Because I'm planning on having more than one virtual machine accessing the file shares, which makes a filesystem with file locking a necessity.

Phraseology

The server is the machine that provides a file system. The client is the machine that mounts that file system. Making a file system available is called exporting, although the client is generally said to mount the FS rather than import it.

Server Configuration

The main file on the server is /etc/exports, which describes which parts of the file system are to be made available and under what circumstances (i.e. permissions and availability). Both of the following lines should be valid - the first is my simple interpretation, allowing one named host read-write access and another read-only; the second is a TinyCore example:

/home/export deb2(rw,sync) tc63(ro)
/home/nfs    192.168.2.0/255.255.255.0(rw,async,no_subtree_check)

man exports is helpful. If you change /etc/exports after the NFS server has started, run exportfs -ra (man exportfs will help with what that means) to re-export the file systems.
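Something like this - the -v pass isn't strictly necessary, it just confirms the change took:

nfsserver# exportfs -ra    # re-export everything listed in /etc/exports
nfsserver# exportfs -v     # list the current exports and their options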

/etc/hosts.allow is currently set to "ALL: ALL" - not a particularly good idea, but okay for experimenting on a private server. (man hosts.allow may be helpful.)
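Something tighter would look like this - a sketch only, reusing the 192.168.2.0/24 subnet from the TinyCore export example above (daemon names can vary by distro):

# /etc/hosts.allow - only answer the local subnet
portmap: 192.168.2.0/255.255.255.0
mountd:  192.168.2.0/255.255.255.0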

Packages needed vary by distro:

Debian: get "nfs-common" and "nfs-kernel-server". There's probably a way to start the services by hand (my best guess is below), but I just rebooted.
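If you'd rather not reboot, something like this should work - an assumption on my part, based on the init script the package installs:

nfsserver# service nfs-kernel-server start    # or: /etc/init.d/nfs-kernel-server start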

TinyCore: you need at least "nfs-utils" and "portmap". portmap on TinyCore requires tcp_wrappers but DOES NOT list it as a dependency, so you have to install it by hand. Also install the filesystems-`uname -r` extension. Start portmap on TC with start-stop-daemon -S -b -x portmap . start-stop-daemon is a link to busybox (like so many things on TC) and fails SILENTLY if it can't find the executable or tcp_wrappers isn't installed. It may be less "proper," but running portmap directly seems to work just as well (or just as badly). Finally, run /usr/local/etc/init.d/nfs-server start . The whole sequence is sketched below.
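Put together, and with the caveat that the portmap path is a guess on my part:

tc$ tce-load -wi nfs-utils
tc$ tce-load -wi portmap
tc$ tce-load -wi tcp_wrappers              # needed by portmap, but not listed as a dependency
tc$ tce-load -wi filesystems-$(uname -r)
tc$ sudo start-stop-daemon -S -b -x /usr/local/sbin/portmap   # or just run "portmap"
tc$ sudo /usr/local/etc/init.d/nfs-server start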

Testing

A useful test on the server: rpcinfo -p | grep nfs tells you whether the NFS service(s) are running (you should see several matching lines on port 2049).
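Healthy output looks something like this (the version numbers will vary):

nfsserver# rpcinfo -p | grep nfs
    100003    2   tcp   2049  nfs
    100003    3   tcp   2049  nfs
    100003    2   udp   2049  nfs
    100003    3   udp   2049  nfs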

A good test on the client: nc -zv <server-name-or-ip> 111, which uses netcat to tell you whether port 111 (the portmapper) is reachable on the server.
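For example - the exact wording depends on which netcat you have, and busybox's version is terser:

deb2$ nc -zv nfsserver 111
Connection to nfsserver 111 port [tcp/sunrpc] succeeded!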

Client Configuration

Debian: get "nfs-common"

TinyCore: get "nfs-utils" and "portmap". See note above about the "portmap" package on TC.

SliTaz: get "nfs-utils" and "portmap", then start portmap: /etc/init.d/portmap start .

Finally run:

deb2# mount -t nfs nfsserver:/home/export /mnt/tmp

"nfsserver" can be an IP, but the request needs to be from a host that's listed in the server's /etc/hosts.allow (by "ALL: ALL" above) and /etc/exports.

With Debian and TinyCore, it's that simple (although the security needs a boatload of tweaking) ... but the share is mounted read-only.

Apparently SliTaz's mount.nfs (called automatically by mount when the FS type is NFS) is broken ( http://forum.slitaz.org/topic/mount-nfs-shares-at-boot ), so the regular mount command shown above won't work, but busybox mount <nfs.server ip addr>:<exported.share> <local.mnt.folder> does. Like Debian and TinyCore, the share is mounted RO.
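Filled in with this entry's example values (the server IP is hypothetical - use your own):

slitaz# busybox mount 192.168.2.10:/home/export /mnt/tmp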

Permissions

Permissions are tricky with NFS. You'll notice above that I'm claiming/complaining that the shares are mounted read-only. This is because I was testing as root - root usually has the most permissive rights on a system, and when in doubt, I test as root before trying to lock security down more tightly. But with NFS this can lead you somewhat astray. NFS's default set-up is what's referred to as "root squashing" (the root_squash export option): any user with root permissions who attempts to use an NFS share will find their permissions silently translated to those of the anonymous user - there's no visible warning of this happening on the client side (perhaps in the logs, which I haven't got to yet) - which means no write access. In this case, the quick fix is to change the server's /etc/exports file:

/home/export deb2(rw,sync,no_root_squash)

Note no_root_squash: this allows root users to use the exports with their normal privileges. Read the rest of this section and delete this option later (it's a significant security risk).

You're not out of the woods yet: anything mounted via NFS has permissions based on the UID and GID of the owner on the server. For me, this means that the files on the server are probably owned by "giles", who has a UID of 1000 and a GID of 1000. If you're on a single-user system, and both the server and the client have the same Linux distro, there's a fair chance your main username has the same UID and GID, and so the client will have the expected permissions. But if you have multi-user systems, or different distros (I'm trying to get TinyCore, Debian, and SliTaz clients to access a Debian server), you have a significant problem. I've solved this in the past by changing the client username's UID/GID combo to match those on the server (mostly outside the scope of this little blog entry, but sketched below). The more users or clients you have, the more of a problem this becomes. This is Unix, and there's an answer for that, but it's another layer of abstraction and complexity called "Network Information Service" (NIS), not something I've ever used. In the context of NFS it can be used to map network users across multiple machines without the UID/GID fiddling I have to do - instead you set up and secure a NIS server, and maintain your list of identities for multiple machines there.
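The UID/GID fiddling goes roughly like this - a sketch assuming the standard shadow utilities, a client where "giles" started out as 1001/1001 (made-up numbers), and no processes running as that user:

client$ id giles
uid=1001(giles) gid=1001(giles) groups=1001(giles)
client# groupmod -g 1000 giles                            # match the server's GID
client# usermod -u 1000 -g 1000 giles                     # match the server's UID
client# find / -xdev -user 1001 -exec chown giles {} \;   # re-own files usermod missed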