The Network File System (NFS) is a protocol that allows access to files on a server in a manner similar to accessing local files.
openSUSE Leap installs NFS v4.2, which introduces support for sparse files, file pre-allocation, server-side clone and copy, application data block (ADB), and labeled NFS for mandatory access control (MAC) (requires MAC on both client and server).
The Network File System (NFS) is a standardized, well-proven, and widely-supported network protocol that allows files to be shared between separate hosts.
The Network Information Service (NIS) can be used to have centralized user management in the network. Combining NFS and NIS allows using file and directory permissions for access control in the network. NFS with NIS makes a network transparent to the user.
In the default configuration, NFS completely trusts the network and thus any machine that is connected to a trusted network. Any user with administrator privileges on any computer with physical access to any network the NFS server trusts can access any files that the server makes available.
Often, this level of security is perfectly satisfactory, such as when the network that is trusted is truly private, often localized to a single cabinet or machine room, and no unauthorized access is possible. In other cases, the need to trust a whole subnet as a unit is restrictive, and there is a need for more fine-grained trust. To meet the need in these cases, NFS supports various security levels using the Kerberos infrastructure. Kerberos requires NFSv4, which is used by default. For details, see Book “Security and Hardening Guide”, Chapter 7 “Network authentication with Kerberos”.
The following are terms used in the YaST module.
An exported file system is a directory exported by an NFS server, which clients can integrate into their systems.
The NFS client is a system that uses NFS services from an NFS server over the Network File System protocol. The TCP/IP protocol is already integrated into the Linux kernel; there is no need to install any additional software.
The NFS server provides NFS services to clients. A running server depends
on the following daemons: nfsd
(worker), idmapd
(ID-to-name
mapping for NFSv4, needed for certain scenarios only), statd
(file locking), and mountd
(mount requests).
NFSv3 is the version 3 implementation, the “old” stateless NFS that supports client authentication.
NFSv4 is the new version 4 implementation that supports secure user authentication via Kerberos. NFSv4 requires one single port only and thus is better suited for environments behind a firewall than NFSv3.
The protocol is specified in RFC 3530 (https://datatracker.ietf.org/doc/html/rfc3530).
Parallel NFS (pNFS) is a protocol extension of NFSv4. pNFS clients can directly access the data on an NFS server.
In principle, all exports can be made using IP addresses only. To avoid
timeouts, you need a working DNS system. DNS is necessary at least for
logging purposes, because the mountd
daemon does reverse lookups.
The NFS server is not part of the default installation. To install the NFS server using YaST, choose Software › Software Management, select Filter › Patterns, and enable the File Server option in the Server Functions section. Click Accept to install the required packages.
Like NIS, NFS is a client/server system. However, a machine can be both—it can supply file systems over the network (export) and mount file systems from other hosts (import).
Mounting NFS volumes locally on the exporting server is not supported on openSUSE Leap.
Configuring an NFS server can be done either through YaST or manually. For authentication, NFS can also be combined with Kerberos.
With YaST, turn a host in your network into an NFS server—a server that exports directories and files to all hosts granted access to it or to all members of a group. Thus, the server can also provide applications without installing the applications locally on every host.
To set up such a server, proceed as follows:
Start YaST and select Network Services › NFS Server; see Figure 22.1, “NFS server configuration tool”. You may be prompted to install additional software.
Click the Start radio button.
If firewalld
is active on your system, configure it separately for NFS
(see Book “Security and Hardening Guide”, Chapter 24 “Masquerading and firewalls”, Section 24.4 “firewalld
”). YaST does not
yet have complete support for firewalld
, so ignore the "Firewall not
configurable" message and continue.
Check whether you want to Enable NFSv4. If you deactivate NFSv4, YaST will only support NFSv3. For information about enabling NFSv2, see Note: NFSv2.
If NFSv4 is selected, additionally enter the appropriate NFSv4 domain
name. This parameter is used by the idmapd
daemon that is required for Kerberos
setups or if clients cannot work with numeric user names. Leave it as
localdomain
(the default) if you do not run
idmapd
or do not have any
special requirements. For more information on the idmapd
daemon, see /etc/idmapd.conf
.
Click Enable GSS Security if you need secure access to the server. A prerequisite for this is to have Kerberos installed on your domain and to have both the server and the clients kerberized. Click Next to proceed with the next configuration dialog.
Click Add Directory in the upper half of the dialog to export your directory.
If you have not configured the allowed hosts already, another dialog for entering the client information and options pops up automatically. Enter the host wild card (usually you can leave the default settings as they are).
There are four possible types of host wild cards that can be set for each
host: a single host (name or IP address), netgroups, wild cards (such as
*
indicating all machines can access the server), and
IP networks.
For more information about these options, see the
exports
man page.
Click Finish to complete the configuration.
The configuration files for the NFS export service are
/etc/exports
and
/etc/sysconfig/nfs
. In addition to these files,
/etc/idmapd.conf
is needed for the NFSv4 server
configuration with kerberized NFS or if the clients cannot work with
numeric user names.
To start or restart the services, run the command
systemctl restart nfsserver
. This also restarts the
RPC portmapper that is required by the NFS server.
To make sure the NFS server always starts at boot time, run sudo
systemctl enable nfsserver
.
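To verify which directories the server currently exports, and with which effective options, you can query the export table with the standard exportfs tool:

> sudo exportfs -v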
NFSv4 is the latest version of the NFS protocol available on openSUSE Leap. Configuring directories for export with NFSv4 is now the same as with NFSv3.
On openSUSE
prior to Leap, the bind mount in
/etc/exports
was mandatory. It is still supported,
but now deprecated.
/etc/exports
The /etc/exports
file contains a list of entries.
Each entry indicates a directory that is shared and how it is shared. A
typical entry in /etc/exports
consists of:
/SHARED/DIRECTORY HOST(OPTION_LIST)
For example:
/export/data 192.168.1.2(rw,sync)
Here the IP address 192.168.1.2
is used to identify
the allowed client. You can also use the name of the host, a wild card
indicating a set of hosts (*.abc.com
,
*
, etc.), or netgroups
(@my-hosts
).
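For illustration only, a hypothetical /etc/exports combining these forms might look like the following (the directories, host name, netgroup, and network are placeholders):

/export/data client1.example.com(rw,sync)
/export/projects @my-hosts(rw,sync)
/export/public *.abc.com(ro,sync)
/export/backup 192.168.1.0/24(rw,sync)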
For a detailed explanation of all options and their meanings, refer to the man page of /etc/exports (man exports).
In case you have modified /etc/exports
while the
NFS server was running, you need to restart it for the changes to become
active: sudo systemctl restart nfsserver
.
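Alternatively, if only the export list changed, you can re-export all entries of /etc/exports without restarting the whole service, using the standard exportfs tool:

> sudo exportfs -ra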
/etc/sysconfig/nfs
The /etc/sysconfig/nfs
file contains a few
parameters that determine NFSv4 server daemon behavior. It is important
to set the parameter NFS4_SUPPORT
to
yes
(default). NFS4_SUPPORT
determines whether the NFS server supports NFSv4 exports and clients.
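The corresponding line in /etc/sysconfig/nfs looks like this:

NFS4_SUPPORT="yes"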
In case you have modified /etc/sysconfig/nfs
while
the NFS server was running, you need to restart it for the changes to
become active: sudo systemctl restart nfsserver
.
On openSUSE prior to Leap, the
--bind
mount in /etc/exports
was
mandatory. It is still supported, but now deprecated. Configuring
directories for export with NFSv4 is now the same as with NFSv3.
If NFS clients still depend on NFSv2, enable it on the server in
/etc/sysconfig/nfs
by setting:
NFSD_OPTIONS="-V2" MOUNTD_OPTIONS="-V2"
After restarting the service, check whether version 2 is available with the command:
> cat /proc/fs/nfsd/versions
+2 +3 +4 +4.1 +4.2
/etc/idmapd.conf
The idmapd
daemon is only
required if Kerberos authentication is used or if clients cannot work
with numeric user names. Linux clients can work with numeric user names
since Linux kernel 2.6.39. The idmapd
daemon does the name-to-ID mapping
for NFSv4 requests to the server and replies to the client.
If required, idmapd
needs to run on the NFSv4 server. Name-to-ID mapping on the client will
be done by nfsidmap
provided by the package
nfs-client.
Make sure that there is a uniform way in which user names and IDs (UIDs) are assigned to users across machines that might be sharing file systems using NFS. This can be achieved by using NIS, LDAP, or any uniform domain authentication mechanism in your domain.
The parameter Domain
must be set the same for both
client and server in the /etc/idmapd.conf
file. If
you are not sure, leave the domain as localdomain
in
the server and client files. A sample configuration file looks like the
following:
[General]
Verbosity = 0
Pipefs-Directory = /var/lib/nfs/rpc_pipefs
Domain = localdomain

[Mapping]
Nobody-User = nobody
Nobody-Group = nobody
To start the idmapd daemon, run systemctl start nfs-idmapd. In case you have modified /etc/idmapd.conf while the daemon was running, you need to restart it for the changes to become active: systemctl restart nfs-idmapd.
For more information, see the man pages of idmapd
and
idmapd.conf
(man idmapd
and
man idmapd.conf
).
To use Kerberos authentication for NFS, Generic Security Services (GSS) must be enabled. Select Enable GSS Security in the initial YaST NFS Server dialog. You must have a working Kerberos server to use this feature. YaST does not set up the server but only uses the provided functionality. To use Kerberos authentication in addition to the YaST configuration, complete at least the following steps before running the NFS configuration:
Make sure that both the server and the client are in the same Kerberos
domain. They must access the same KDC (Key Distribution Center) server
and share their krb5.keytab
file (the default
location on any machine is /etc/krb5.keytab
). For
more information about Kerberos, see
Book “Security and Hardening Guide”, Chapter 7 “Network authentication with Kerberos”.
Start the gssd service on the client with systemctl start
rpc-gssd.service
.
Start the svcgssd service on the server with systemctl start
rpc-svcgssd.service
.
Kerberos authentication also requires the idmapd
daemon to run on the server. For more
information, refer to /etc/idmapd.conf
.
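To check that the keytab is in place on a machine, you can list its entries with klist from the Kerberos client tools (the principals shown will be specific to your realm):

> sudo klist -k /etc/krb5.keytab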
For more information about configuring kerberized NFS, refer to the links in Section 22.6, “More information”.
To configure your host as an NFS client, you do not need to install additional software. All needed packages are installed by default.
Authorized users can mount NFS directories from an NFS server into the local file tree using the YaST NFS client module. Proceed as follows:
Start the YaST NFS client module.
Click Add in the NFS Shares tab. Enter the host name of the NFS server, the directory to import, and the mount point at which to mount this directory locally.
When using NFSv4, select Enable NFSv4 and make sure that the NFSv4 Domain Name contains the same value as used by the NFSv4 server (the default is localdomain).
To use Kerberos authentication for NFS, GSS security must be enabled. Select Enable GSS Security.
Enable Open Port in Firewall in the NFS Settings tab if you use a firewall and want to allow access to the service from remote computers. The firewall status is displayed next to the check box.
Click OK to save your changes.
The configuration is written to /etc/fstab
and the
specified file systems are mounted. When you start the YaST configuration
client at a later time, it also reads the existing configuration from this
file.
On (diskless) systems where the root partition is mounted via network as an NFS share, you need to be careful when configuring the network device with which the NFS share is accessible.
When shutting down or rebooting the system, the default processing order is to turn off network connections, then unmount the root partition. With NFS root, this order causes problems, as the root partition cannot be cleanly unmounted while the network connection to the NFS share is already deactivated. To prevent the system from deactivating the relevant network device, open the network device configuration tab as described in Section 13.4.1.2.5, “Activating the network device” and choose On NFSroot in the Device Activation pane.
The prerequisite for importing file systems manually from an NFS server is
a running RPC port mapper. The nfs
service takes care to
start it properly; thus, start it by entering systemctl start
nfs
as root
. Then
remote file systems can be mounted in the file system just like local
partitions, using the mount command:
> sudo mount HOST:REMOTE-PATH LOCAL-PATH
To import user directories from the nfs.example.com
machine, for example, use:
> sudo mount nfs.example.com:/home /home
To define a count of TCP connections that the clients make to the NFS
server, you can use the nconnect
option of the
mount
command. You can specify any number between 1 and
16, where 1 is the default value if the mount option has not been specified.
The nconnect
setting is applied only during the first
mount process to the particular NFS server. If the same client executes the
mount command to the same NFS server, all already established connections
will be shared—no new connection will be established. To change the
nconnect
setting, you have to unmount
all client connections to the particular
NFS server. Then you can define a new value for the
nconnect
option.
You can find the value of nconnect that is currently in effect in the output of the mount command, or in the file /proc/mounts. If there is no value for the mount option, then the option has not been used during mounting and the default value of 1 is in use.
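As a sketch, a mount using eight connections and a subsequent check could look like this (the server name and paths are examples only):

> sudo mount -o nconnect=8 nfs.example.com:/data /local/path
> grep nconnect /proc/mounts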
As you can close and open connections after the first mount, the actual count of connections does not necessarily have to be the same as the value of nconnect.
The autofs daemon can be used to mount remote file systems automatically.
Add the following entry to the /etc/auto.master
file:
/nfsmounts /etc/auto.nfs
Now the /nfsmounts
directory acts as the root for all
the NFS mounts on the client if the auto.nfs
file is
filled appropriately. The name auto.nfs
is chosen for
the sake of convenience—you can choose any name. In
auto.nfs
add entries for all the NFS mounts as
follows:
localdata -fstype=nfs server1:/data
nfs4mount -fstype=nfs4 server2:/
Activate the settings with systemctl start autofs
as
root
. In this example, /nfsmounts/localdata
,
the /data
directory of
server1
, is mounted with NFS and
/nfsmounts/nfs4mount
from
server2
is mounted with NFSv4.
If the /etc/auto.master
file is edited while the
service autofs is running, the automounter must be restarted for the
changes to take effect with systemctl restart autofs
.
/etc/fstab
A typical NFSv3 mount entry in /etc/fstab
looks like
this:
nfs.example.com:/data /local/path nfs rw,noauto 0 0
For NFSv4 mounts, use nfs4
instead of
nfs
in the third column:
nfs.example.com:/data /local/pathv4 nfs4 rw,noauto 0 0
The noauto
option prevents the file system from being
mounted automatically at start-up. If you want to mount the respective
file system manually, it is possible to shorten the mount command
specifying the mount point only:
> sudo mount /local/path
If you do not enter the noauto
option, the init
scripts of the system will handle the mount of those file systems at
start-up.
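To mount all NFS entries from /etc/fstab in one go, for example after editing the file, you can restrict mount -a to the NFS file system types:

> sudo mount -a -t nfs,nfs4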
NFS is one of the oldest protocols, developed in the 1980s. As such, NFS is usually sufficient if you want to share small files. However, when you want to transfer big files or many clients want to access data, an NFS server becomes a bottleneck and has a significant impact on the system performance. This is because files are quickly getting bigger, whereas the relative speed of Ethernet has not fully kept pace.
When you request a file from a regular NFS server, the server looks up the file metadata, collects all the data, and transfers it over the network to your client. However, the performance bottleneck becomes apparent no matter how small or big the files are:
With small files, most of the time is spent collecting the metadata.
With big files, most of the time is spent on transferring the data from server to client.
pNFS, or parallel NFS, overcomes this limitation as it separates the file system metadata from the location of the data. As such, pNFS requires two types of servers:
A metadata or control server that handles all the non-data traffic
One or more storage server(s) that hold(s) the data
The metadata and the storage servers form a single, logical NFS server. When a client wants to read or write, the metadata server tells the NFSv4 client which storage server to use to access the file chunks. The client can access the data directly on the server.
openSUSE Leap supports pNFS on the client side only.
Proceed as described in Procedure 22.2, “Importing NFS directories”, but click the NFSv4 Share check box and optionally pNFS (v4.2). YaST will do all the necessary steps and will write all the required options in the file /etc/fstab.
Refer to Section 22.4.2, “Importing file systems manually” to start. Most of the
configuration is done by the NFSv4 server. For pNFS, the only difference
is to add the minorversion
option and the metadata server
MDS_SERVER to your mount
command:
> sudo mount -t nfs4 -o minorversion=1 MDS_SERVER MOUNTPOINT
To help with debugging, change the value in the /proc
file system:
> echo 32767 | sudo tee /proc/sys/sunrpc/nfsd_debug
> echo 32767 | sudo tee /proc/sys/sunrpc/nfs_debug
There is no single standard for Access Control Lists (ACLs) in Linux beyond
the simple read, write, and execute (rwx
) flags for user,
group, and others (ugo
). One option for finer control is
the Draft POSIX ACLs, which were never formally
standardized by POSIX. Another is the NFSv4 ACLs, which were designed to be
part of the NFSv4 network file system with the goal of making something that
provided reasonable compatibility between POSIX systems on Linux and WIN32
systems on Microsoft Windows.
NFSv4 ACLs are not sufficient to correctly implement Draft POSIX ACLs so no
attempt has been made to map ACL accesses on an NFSv4 client (such as using
setfacl
).
When using NFSv4, Draft POSIX ACLs cannot be used even in emulation and NFSv4
ACLs need to be used directly; that means while setfacl
can work on NFSv3, it cannot work on NFSv4. To allow NFSv4 ACLs to be used on
an NFSv4 file system, openSUSE Leap provides the
nfs4-acl-tools
package, which contains the following:
nfs4-getfacl
nfs4-setfacl
nfs4-editacl
These operate in a generally similar way to getfacl
and
setfacl
for examining and modifying NFSv4 ACLs. These
commands are effective only if the file system on the NFS server provides
full support for NFSv4 ACLs. Any limitation imposed by the server will affect
programs running on the client in that some particular combinations of Access
Control Entries (ACEs) might not be possible.
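As a hedged example of examining and modifying an NFSv4 ACL (the installed binaries are typically named with underscores, for example nfs4_getfacl and nfs4_setfacl; the mount path and principal below are placeholders):

> nfs4_getfacl /mnt/nfs4/file.txt
> nfs4_setfacl -a "A::alice@localdomain:r" /mnt/nfs4/file.txt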
It is not supported to mount NFS volumes locally on the exporting NFS server.
For information, see Introduction to NFSv4 ACLs at http://wiki.linux-nfs.org/wiki/index.php/ACLs#Introduction_to_NFSv4_ACLs.
In addition to the man pages of exports
,
nfs
, and mount
, information about
configuring an NFS server and client is available in
/usr/share/doc/packages/nfsidmap/README
. For further
documentation online, refer to the following Web sites:
For general information about network security, refer to Book “Security and Hardening Guide”, Chapter 24 “Masquerading and firewalls”.
Refer to Section 23.4, “Auto-mounting an NFS share” if you need to automatically mount NFS exports.
For more details about configuring NFS by using AutoYaST, refer to Book “AutoYaST Guide”, Chapter 4 “Configuration and installation options”, Section 4.19 “NFS client and server”.
For instructions about securing NFS exports with Kerberos, refer to Book “Security and Hardening Guide”, Chapter 7 “Network authentication with Kerberos”, Section 7.6 “Kerberos and NFS”.
Find the detailed technical documentation online at SourceForge.
In some cases, you can understand the problem in your NFS by reading the
error messages produced and looking into the
/var/log/messages
file. However, in many cases,
the information provided by the error messages and in
/var/log/messages
is not detailed enough. In these
cases, most NFS problems can be best understood through capturing network
packets while reproducing the problem.
Clearly define the problem. Examine the problem by testing the system in a variety of ways and determining when the problem occurs. Isolate the simplest steps that lead to the problem. Then try to reproduce the problem as described in the procedure below.
Capture network packets. On Linux, you can use the
tcpdump
command, which is supplied by the
tcpdump package.
An example of tcpdump
syntax follows:
tcpdump -s0 -i eth0 -w /tmp/nfs-demo.cap host x.x.x.x
Where:

-s0
Prevents packet truncation.

-i eth0
Should be replaced with the name of the local interface which the packets will pass through. You can use the any value to capture all interfaces at the same time, but usage of this attribute often results in inferior data as well as confusion in analysis.

-w /tmp/nfs-demo.cap
Designates the name of the capture file to write.

host x.x.x.x
Should be replaced with the IP address of the other end of the NFS connection. For example, when taking a tcpdump at the NFS client side, specify the IP address of the NFS server, and vice versa.
In some cases, capturing the data at either the NFS client or NFS server is sufficient. However, in cases where end-to-end network integrity is in doubt, it is often necessary to capture data at both ends.
Do not shut down the tcpdump
process and proceed to
the next step.
(Optional) If the problem occurs during execution of the
nfs mount
command itself, you can try to use the
high-verbosity option (-vvv
) of the
nfs mount
command to get more output.
(Optional) Get an strace
of the reproduction method. An
strace
of reproduction steps records exactly what system
calls were made at exactly what time. This information can be used to
further determine on which events in the tcpdump
you
should focus.
For example, if you found out that executing the command
mycommand --param was failing on an NFS mount, then you
could strace
the command with:
strace -ttf -s128 -o/tmp/nfs-strace.out mycommand --param
In case you do not get any strace
of the reproduction
step, note the time when the problem was reproduced. Check the
/var/log/messages
log file to isolate the problem.
Once the problem has been reproduced, stop tcpdump
running in your terminal by pressing
CTRL–c. If the
strace
command resulted in a hang, also terminate the
strace
command.
An administrator with experience in analyzing packet traces and
strace
data can now inspect data in
/tmp/nfs-demo.cap
and
/tmp/nfs-strace.out
.
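For a first look at the capture without a graphical analyzer, tcpdump itself can read the file back; filtering on the standard NFS port 2049 is usually a good starting point:

> tcpdump -r /tmp/nfs-demo.cap port 2049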
Please bear in mind that the following section is intended only for skilled NFS administrators who understand the NFS code. Therefore, perform the first steps described in Section 22.7.1, “Common troubleshooting” to help narrow down the problem and to inform an expert about which areas of debug code (if any) might be needed to learn deeper details.
There are various areas of debug code that can be enabled to gather additional NFS-related information. However, the debug messages are quite cryptic and the volume of them can be so large that the use of debug code can affect system performance. It may even impact the system enough to prevent the problem from occurring. In the majority of cases, the debug code output is not needed, nor is it typically useful to anyone who is not highly familiar with the NFS code.
rpcdebug
The rpcdebug tool allows you to set and clear NFS client and server debug flags. In case the rpcdebug tool is not accessible on your system, you can install it from the package nfs-client (for the NFS client) or nfs-kernel-server (for the NFS server).
To set debug flags, run:
rpcdebug -m module -s flags
To clear the debug flags, run:
rpcdebug -m module -c flags
where module can be:

nfsd
Debug for the NFS server code.

nfs
Debug for the NFS client code.

nlm
Debug for the NFS Lock Manager, at either the NFS client or NFS server. This only applies to NFS v2/v3.

rpc
Debug for the Remote Procedure Call module, at either the NFS client or NFS server.
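For example, to enable all NFS server flags while reproducing a problem and to clear them again afterward (running rpcdebug with only -m prints the flags that are currently set):

> sudo rpcdebug -m nfsd -s all
> sudo rpcdebug -m nfsd
> sudo rpcdebug -m nfsd -c all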
For information on detailed usage of the rpcdebug
command, refer to the manual page:
man 8 rpcdebug
NFS activities may depend on other related services, such as the NFS mount
daemon—rpc.mountd
. You can set options for related
services within /etc/sysconfig/nfs
.
For example, /etc/sysconfig/nfs
contains the parameter:
MOUNTD_OPTIONS=""
To enable the debug mode, you have to use the -d
option
followed by any of the values: all
,
auth
, call
, general
,
or parse
.
For example, the following code enables all forms of
rpc.mountd
logging:
MOUNTD_OPTIONS="-d all"
For all available options refer to the manual pages:
man 8 rpc.mountd
After changing /etc/sysconfig/nfs
, services need to be
restarted:
systemctl restart nfsserver   # for NFS server related changes
systemctl restart nfs         # for NFS client related changes