Distributing and sharing file systems over a network is a common task in corporate environments. The well-proven network file system (NFS) works with NIS, the yellow pages protocol. For a more secure protocol that works with LDAP and Kerberos, check NFSv4. Combined with pNFS, you can eliminate performance bottlenecks.
NFS with NIS makes a network transparent to the user. With NFS, it is possible to distribute arbitrary file systems over the network. With an appropriate setup, users always find themselves in the same environment regardless of the terminal they currently use.
In principle, all exports can be made using IP addresses only. However, to avoid time-outs you need a working DNS system: DNS is necessary at least for logging purposes, because the mountd daemon performs reverse lookups.
The following are terms used in the YaST module.
A directory exported by an NFS server, which clients can integrate into their systems.
The NFS client is a system that uses NFS services from an NFS server over the Network File System protocol. The TCP/IP protocol is already integrated into the Linux kernel; there is no need to install any additional software.
The NFS server provides NFS services to clients. A running server
depends on the following daemons:
nfsd
(worker),
idmapd
(user and group name
mappings to IDs and vice versa),
statd
(file locking), and
mountd
(mount requests).
NFSv3 is the version 3 implementation, the “old” stateless NFS that supports client authentication.
NFSv4 is the new version 4 implementation that supports secure user authentication via Kerberos. NFSv4 requires only a single port and thus is better suited for environments behind a firewall than NFSv3.
The protocol is specified in RFC 3530 (http://tools.ietf.org/html/rfc3530).
Parallel NFS, a protocol extension of NFSv4. pNFS clients can directly access the data on an NFS server.
The NFS server software is not part of the default installation. If you
configure an NFS server as described in
Section 22.3, “Configuring NFS Server” you will automatically
be prompted to install the required packages. Alternatively, install the
package nfs-kernel-server
with
YaST or Zypper.
Like NIS, NFS is a client/server system. However, a machine can be both—it can supply file systems over the network (export) and mount file systems from other hosts (import).
Mounting NFS volumes locally on the exporting server is not supported on SUSE Linux Enterprise systems, as is the case on all Enterprise-class Linux systems.
Configuring an NFS server can be done either through YaST or manually. For authentication, NFS can also be combined with Kerberos.
With YaST, turn a host in your network into an NFS server—a server that exports directories and files to all hosts granted access to it or to all members of a group. Thus, the server can also provide applications without installing the applications locally on every host.
To set up such a server, proceed as follows:
Start YaST and select Network Services › NFS Server; see Figure 22.1, “NFS Server Configuration Tool”. You may be prompted to install additional software.
Activate the radio button to start the NFS server.
If a firewall is active on your system (SuSEFirewall2), check the option to open the required firewall ports so that clients can reach the nfs service.
Check whether you want to enable NFSv4. If you deactivate NFSv4, YaST will only support NFSv3 and NFSv2. If NFSv4 is selected, additionally enter the appropriate NFSv4 domain name.
Make sure the name is the same as the one in the
/etc/idmapd.conf
file of any NFSv4 client that
accesses this particular server. This parameter is for the
idmapd
daemon that is
required for NFSv4 support (on both server and client). Leave it as
localdomain
(the default) if you do not have any
special requirements.
Click Enable GSS Security if you need secure access to the server. A prerequisite for this is to have Kerberos installed in your domain and to have both the server and the clients kerberized. Click Next to proceed with the next configuration dialog.
Click Add Directory in the upper half of the dialog to export your directory.
If you have not configured the allowed hosts already, another dialog for entering the client information and options pops up automatically. Enter the host wild card (usually you can leave the default settings as they are).
There are four possible types of host wild cards that can be set for
each host: a single host (name or IP address), netgroups, wild cards
(such as *
indicating all machines can access the
server), and IP networks.
For more information about these options, see the
exports
man page.
Click Finish to complete the configuration.
The configuration files for the NFS export service are
/etc/exports
and
/etc/sysconfig/nfs
. In addition to these files,
/etc/idmapd.conf
is needed for the NFSv4 server
configuration. To start or restart the services, run the command
systemctl restart nfsserver
. This also starts
the rpc.idmapd daemon
if NFSv4 is configured in
/etc/sysconfig/nfs
. The NFS server depends on a
running RPC portmapper. Therefore, it also starts or restarts the
portmapper service.
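To verify that the server and the portmapper are running after a restart, you can query their status. The following assumes the portmapper runs as the rpcbind service, which may be named differently on older systems:
systemctl status nfsserver
systemctl status rpcbind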
NFSv4 is the latest version of NFS protocol available on openSUSE Leap. Configuring directories for export with NFSv4 is now the same as with NFSv3.
On the previous version, SUSE Linux Enterprise Server 11, the bind mount in
/etc/exports
was mandatory. It is still supported,
but now deprecated.
The /etc/exports
file contains a list of
entries. Each entry indicates a directory that is shared and how it
is shared. A typical entry in /etc/exports
consists of:
/shared/directory host(option_list)
For example:
/export/data 192.168.1.2(rw,sync)
Here the IP address 192.168.1.2
is used to
identify the allowed client. You can also use the name of the host, a
wild card indicating a set of hosts (*.abc.com
,
*
, etc.), or netgroups
(@my-hosts
).
For a detailed explanation of all options and their meaning, refer to
the man page of exports
(man
exports
).
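For illustration, a hypothetical /etc/exports could combine several of the client types mentioned above; the directories below /export, the domain, the network, and the netgroup are examples only:
/export/data    192.168.1.2(rw,sync)
/export/public  *.abc.com(ro,sync) 192.168.1.0/24(ro,sync)
/export/home    @my-hosts(rw,sync)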
The /etc/sysconfig/nfs
file contains a few
parameters that determine NFSv4 server daemon behavior. It is
important to set the parameter NFS4_SUPPORT
to yes
(default).
NFS4_SUPPORT
determines whether the NFS
server supports NFSv4 exports and clients.
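The corresponding entry in /etc/sysconfig/nfs looks similar to the following (a minimal excerpt; the file contains further parameters not shown here):
NFS4_SUPPORT="yes"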
On SUSE Linux Enterprise prior to version 12, the --bind
mount in
/etc/exports
was mandatory. It is still
supported, but now deprecated. Configuring directories for export
with NFSv4 is now the same as with NFSv3.
Every user on a Linux machine has a name and an ID. idmapd does the name-to-ID mapping for NFSv4 requests to the server and replies to the client. It must be running on both server and client for NFSv4, because NFSv4 uses only names for its communication.
Make sure that there is a uniform way in which user names and IDs (UIDs) are assigned to users across machines that might be sharing file systems using NFS. This can be achieved by using NIS, LDAP, or any uniform domain authentication mechanism in your domain.
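To check whether a given account resolves to the same UID on both machines, you can compare the output of the id command on server and client; the user name tux is just a placeholder:
id tux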
The parameter Domain
must be set the same for
both, client and server in the /etc/idmapd.conf
file. If you are not sure, leave the domain as
localdomain
in the server and client files. A
sample configuration file looks like the following:
[General]
Verbosity = 0
Pipefs-Directory = /var/lib/nfs/rpc_pipefs
Domain = localdomain

[Mapping]
Nobody-User = nobody
Nobody-Group = nobody
For more information, see the man pages of idmapd
and idmapd.conf
(man idmapd
and
man idmapd.conf
).
After changing /etc/exports
or
/etc/sysconfig/nfs
, start or restart the NFS server
service:
systemctl restart nfsserver
After changing /etc/idmapd.conf
, reload the
configuration file:
killall -HUP rpc.idmapd
If the NFS service needs to start at boot time, run:
systemctl enable nfsserver
To use Kerberos authentication for NFS, GSS security must be enabled. Select Enable GSS Security in the initial YaST NFS Server dialog. You must have a working Kerberos server to use this feature. YaST does not set up the server but only uses the provided functionality. In addition to the YaST configuration, complete at least the following steps before running the NFS configuration:
Make sure that both the server and the client are in the same Kerberos
domain. They must access the same KDC (Key Distribution Center) server
and share their krb5.keytab
file (the default
location on any machine is /etc/krb5.keytab
). For
more information about Kerberos, see
Book “Security Guide”, Chapter 7 “Network Authentication with Kerberos”.
Start the gssd service on the client with systemctl start
gssd
.
Start the svcgssd service on the server with systemctl start
svcgssd
.
For more information about configuring kerberized NFS, refer to the links in Section 22.5, “For More Information”.
To configure your host as an NFS client, you do not need to install additional software. All needed packages are installed by default.
Authorized users can mount NFS directories from an NFS server into the local file tree using the YaST NFS client module. Proceed as follows:
Start the YaST NFS client module.
Click Add in the NFS Shares tab. Enter the host name of the NFS server, the directory to import, and the mount point at which to mount this directory locally.
When using NFSv4, enable NFSv4 and make sure the NFSv4 domain name matches the one used by the server. The default is localdomain.
To use Kerberos authentication for NFS, GSS security must be enabled. Select Enable GSS Security.
Enable the option to open the firewall port in the NFS Settings tab if you use a firewall and want to allow access to the service from remote computers. The firewall status is displayed next to the check box.
Click OK to save your changes.
The configuration is written to /etc/fstab
and the
specified file systems are mounted. When you start the YaST
configuration client at a later time, it also reads the existing
configuration from this file.
On (diskless) systems where the root partition is mounted via network as an NFS share, you need to be careful when configuring the network device with which the NFS share is accessible.
When shutting down or rebooting the system, the default processing order is to turn off network connections, then unmount the root partition. With NFS root, this order causes problems because the root partition cannot be cleanly unmounted when the network connection to the NFS share has already been deactivated. To prevent the system from deactivating the relevant network device, open the network device configuration tab as described in Section 13.4.1.2.5, “Activating the Network Device”, and choose On NFSroot in the Device Activation pane.
The prerequisite for importing file systems manually from an NFS server is
a running RPC port mapper. The nfs
service takes care of starting it properly; thus, start it by entering systemctl start
nfs
as root
. Then
remote file systems can be mounted in the file system like local
partitions using mount
:
mount host:remote-path local-path
To import user directories from the nfs.example.com
machine, for example, use:
mount nfs.example.com:/home /home
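To verify which NFS file systems are currently mounted, you can, for example, list the mounts of the relevant types:
mount -t nfs,nfs4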
The autofs daemon can be used to mount remote file systems
automatically. Add the following entry to the
/etc/auto.master
file:
/nfsmounts /etc/auto.nfs
Now the /nfsmounts
directory acts as the root for
all the NFS mounts on the client if the auto.nfs
file is filled appropriately. The name auto.nfs
is
chosen for the sake of convenience—you can choose any name. In
auto.nfs
add entries for all the NFS mounts as
follows:
localdata -fstype=nfs server1:/data
nfs4mount -fstype=nfs4 server2:/
Activate the settings with systemctl start
autofs
as root
. In this example,
/nfsmounts/localdata
, the
/data
directory of
server1
, is mounted with NFS and
/nfsmounts/nfs4mount
from
server2
is mounted with NFSv4.
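With autofs, the mount is triggered on first access. For example, simply listing the directory causes autofs to mount the share (the path follows the example above):
ls /nfsmounts/localdata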
If the /etc/auto.master
file is edited while the
service autofs is running, the automounter must be restarted for the
changes to take effect with systemctl restart
autofs
.
A typical NFSv3 mount entry in /etc/fstab
looks
like this:
nfs.example.com:/data /local/path nfs rw,noauto 0 0
For NFSv4 mounts, use nfs4
instead of
nfs
in the third column:
nfs.example.com:/data /local/pathv4 nfs4 rw,noauto 0 0
The noauto
option prevents the file system from
being mounted automatically at start-up. If you want to mount the
respective file system manually, it is possible to shorten the mount
command specifying the mount point only:
mount /local/path
If you do not enter the noauto
option, the init
scripts of the system will handle the mount of those file systems at
start-up.
NFS is one of the oldest protocols, developed in the '80s. As such, NFS is usually sufficient if you want to share small files. However, when you want to transfer big files or many clients want to access data, an NFS server becomes a bottleneck and significantly impacts the system performance. This is because files are quickly getting bigger, while the relative speed of Ethernet has not fully kept up.
When you request a file from a “normal” NFS server, the server looks up the file metadata, collects all the data and transfers it over the network to your client. However, the performance bottleneck becomes apparent no matter how small or big the files are:
With small files most of the time is spent collecting the metadata.
With big files most of the time is spent on transferring the data from server to client.
pNFS, or parallel NFS, overcomes this limitation as it separates the file system metadata from the location of the data. As such, pNFS requires two types of servers:
A metadata or control server that handles all the non-data traffic
One or more storage server(s) that hold(s) the data
The metadata and the storage servers form a single, logical NFS server. When a client wants to read or write, the metadata server tells the NFSv4 client which storage server to use to access the file chunks. The client can access the data directly on the server.
SUSE Linux Enterprise supports pNFS on the client side only.
Proceed as described in Procedure 22.2, “Importing NFS Directories”, but additionally activate the NFSv4 check box and, optionally, the pNFS option. YaST will do all the necessary steps and will write all the required options in the file
/etc/exports
.
Refer to Section 22.4.2, “Importing File Systems Manually” to start. Most of the
configuration is done by the NFSv4 server. For pNFS, the only
difference is to add the minorversion
option and the
metadata server MDS_SERVER to your
mount
command:
mount -t nfs4 -o minorversion=1 MDS_SERVER MOUNTPOINT
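For example, assuming the metadata server is reachable as nfs.example.com and exports the root file system handle, and /mnt/data is a hypothetical local mount point, the call could look like this:
mount -t nfs4 -o minorversion=1 nfs.example.com:/ /mnt/data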
To help with debugging, change the value in the
/proc
file system:
echo 32767 > /proc/sys/sunrpc/nfsd_debug
echo 32767 > /proc/sys/sunrpc/nfs_debug
In addition to the man pages of exports
,
nfs
, and mount
, information about
configuring an NFS server and client is available in
/usr/share/doc/packages/nfsidmap/README
. For further
documentation online refer to the following Web sites:
Find the detailed technical documentation online at SourceForge.
For instructions for setting up kerberized NFS, refer to NFS Version 4 Open Source Reference Implementation.
If you have questions on NFSv4, refer to the Linux NFSv4 FAQ.