Copyright © 2006–2018 SUSE LLC and contributors. All rights reserved.
Permission is granted to copy, distribute and/or modify this document under the terms of the GNU Free Documentation License, Version 1.2 or (at your option) version 1.3; with the Invariant Section being this copyright notice and license. A copy of the license version 1.2 is included in the section entitled “GNU Free Documentation License”.
For SUSE or Novell trademarks, see the Novell Trademark and Service Mark list http://www.novell.com/company/legal/trademarks/tmlist.html. All other third party trademarks are the property of their respective owners. A trademark symbol (®, ™ etc.) denotes a SUSE or Novell trademark; an asterisk (*) denotes a third party trademark.
All information found in this book has been compiled with utmost attention to detail. However, this does not guarantee complete accuracy. Neither SUSE LLC, its affiliates, the authors nor the translators shall be held liable for possible errors or the consequences thereof.
This manual gives you a general understanding of openSUSE® Leap. It is intended mainly for system administrators and home users with basic system administration knowledge. Check out the various parts of this manual for a selection of applications needed in everyday life and in-depth descriptions of advanced installation and configuration scenarios.
Learn about advanced administration tasks such as using YaST in text mode and managing software from the command line. Find out how to do system roll-backs with Snapper and how to use advanced storage techniques on openSUSE Leap.
Get an introduction to the components of your Linux system and a deeper understanding of their interaction.
Learn how to configure the various network and file services that come with openSUSE Leap.
Get an introduction to mobile computing with openSUSE Leap, and get to know the various options for wireless computing and power management.
Many chapters in this manual contain links to additional documentation resources. These include additional documentation that is available on the system, as well as documentation available on the Internet.
For an overview of the documentation available for your product and the latest documentation updates, refer to http://doc.opensuse.org/ or to the following section.
We provide HTML and PDF versions of our books in different languages. The following manuals for users and administrators are available for this product:
This manual will see you through your initial contact with openSUSE® Leap. Check out the various parts of this manual to learn how to install, use and enjoy your system.
Covers system administration tasks like maintaining, monitoring and customizing an initially installed system.
Describes virtualization technology in general, and introduces libvirt—the unified interface to virtualization—and detailed information on specific hypervisors.
AutoYaST is a system for installing one or more openSUSE Leap systems automatically and without user intervention, using an AutoYaST profile that contains installation and configuration data. The manual guides you through the basic steps of auto-installation: preparation, installation, and configuration.
Introduces basic concepts of system security, covering both local and network security aspects. Shows how to use the product inherent security software like AppArmor or the auditing system that reliably collects information about any security-relevant events.
An administrator's guide for problem detection, resolution and optimization. Find how to inspect and optimize your system by means of monitoring tools and how to efficiently manage resources. Also contains an overview of common problems and solutions and of additional help and documentation resources.
Introduces the GNOME desktop of openSUSE Leap. It guides you through using and configuring the desktop and helps you perform key tasks. It is intended mainly for end users who want to make efficient use of GNOME as their default desktop.
Find HTML versions of most product manuals in your installed system under /usr/share/doc/manual or in the help centers of your desktop. Find the latest documentation updates at http://doc.opensuse.org/ where you can download PDF or HTML versions of the manuals for your product.
Several feedback channels are available:
For services and support options available for your product, refer to http://www.suse.com/support/.
To report bugs for a product component, go to https://scc.suse.com/support/requests, log in, and create a new request.
We want to hear your comments about and suggestions for this manual and the other documentation included with this product. Use the User Comments feature at the bottom of each page in the online documentation or go to http://www.suse.com/doc/feedback.html and enter your comments there.
For feedback on the documentation of this product, you can also send a mail to doc-team@suse.de. Make sure to include the document title, the product version and the publication date of the documentation. To report errors or suggest enhancements, provide a concise description of the problem and refer to the respective section number and page (or URL).
The following typographical conventions are used in this manual:
/etc/passwd: directory names and file names
placeholder: replace placeholder with the actual value
PATH: the environment variable PATH
ls, --help: commands, options, and parameters
user: users or groups
Alt, Alt–F1: a key to press or a key combination; keys are shown in uppercase as on a keyboard
›: menu items, buttons
Dancing Penguins (Chapter Penguins, ↑Another Manual): a reference to a chapter in another manual
This documentation is written in SUSEDoc, a subset of DocBook 5. The XML source files were validated by jing (see https://code.google.com/p/jing-trang/), processed by xsltproc, and converted into XSL-FO using a customized version of Norman Walsh's stylesheets. The final PDF is formatted through FOP from Apache Software Foundation. The open source tools and the environment used to build this documentation are provided by the DocBook Authoring and Publishing Suite (DAPS). The project's home page can be found at https://github.com/openSUSE/daps.
The XML source code of this documentation can be found at https://github.com/SUSE/doc-sle.
The source code of openSUSE Leap is publicly available. Refer to http://en.opensuse.org/Source_code for download links and more information.
With a lot of voluntary commitment, the developers of Linux cooperate on a global scale to promote the development of Linux. We thank them for their efforts—this distribution would not exist without them. Special thanks, of course, goes to Linus Torvalds.
Virtual Network Computing (VNC) enables you to control a remote computer via a graphical desktop (as opposed to a remote shell access). VNC is platform-independent and lets you access the remote machine from any operating system.
openSUSE Leap supports two different kinds of VNC sessions: One-time sessions that “live” as long as the VNC connection from the client is kept up, and persistent sessions that “live” until they are explicitly terminated.
Sophisticated system configurations require specific disk setups. All common partitioning tasks can be done with YaST. To get persistent device naming with block devices, use the block devices below /dev/disk/by-id or /dev/disk/by-uuid. Logical Volume Management (LVM) is a disk partitioning scheme t…
openSUSE Leap supports the parallel installation of multiple kernel versions. When installing a second kernel, a boot entry and an initrd are automatically created, so no further manual configuration is needed. When rebooting the machine, the newly added kernel is available as an additional boot option.
Using this functionality, you can safely test kernel updates while being able to always fall back to the proven former kernel. To do so, do not use the update tools (such as the YaST Online Update or the updater applet), but instead follow the process described in this chapter.
This chapter introduces GNOME configuration options which administrators can use to adjust system-wide settings, such as customizing menus, installing themes, configuring fonts, changing preferred applications, and locking down capabilities.
This section is intended for system administrators and experts who do not run an X server on their systems and depend on the text-based installation tool. It provides basic information about starting and operating YaST in text mode.
YaST in text mode uses the ncurses library to provide an easy pseudo-graphical user interface. The ncurses library is installed by default. The minimum supported size of the terminal emulator in which to run YaST is 80x25 characters.
When you start YaST in text mode, the YaST control center appears (see Figure 1.1). The main window consists of three areas. The left frame features the categories to which the various modules belong. This frame is active when YaST is started and therefore it is marked by a bold white border. The active category is highlighted. The right frame provides an overview of the modules available in the active category. The bottom frame contains the buttons for Help and Quit.
When you start the YaST control center, the category Software is selected automatically. Use ↓ and ↑ to change the category. To select a module from the category, activate the right frame with → and then use ↓ and ↑ to select the module. Keep the arrow keys pressed to scroll through the list of available modules. The selected module is highlighted. Press Enter to start the active module.
Various buttons or selection fields in the module contain a highlighted letter (yellow by default). Use Alt–highlighted_letter to select a button directly instead of navigating there with →|. Exit the YaST control center by pressing Alt–Q or by selecting Quit and pressing Enter.
If a YaST dialog gets corrupted or distorted (for example, while resizing the window), press Ctrl–L to refresh and restore its contents.
If your window manager uses global Alt combinations, the Alt combinations in YaST might not work. Keys like Alt or Shift can also be occupied by the settings of the terminal.
Alt shortcuts can be executed with Esc instead of Alt. For example, Esc–H replaces Alt–H. (First press Esc, then press H.)
If the Alt and Shift combinations are occupied by the window manager or the terminal, use the combinations Ctrl–F (forward) and Ctrl–B (backward) instead.
The function keys (F1 to F12) are also mapped to buttons. Certain function keys might be occupied by the terminal and may not be available for YaST. However, the Alt key combinations and function keys should always be fully available on a pure text console.
Besides the text mode interface, YaST provides a pure command line interface. To get a list of YaST command line options, enter:
yast -h
To save time, the individual YaST modules can be started directly. To start a module, enter:
yast <module_name>
View a list of all module names available on your system with yast -l or yast --list. Start the network module, for example, with yast lan.
If you know a package name and the package is provided by any of your active installation repositories, you can use the command line option -i to install the package:
yast -i <package_name>
or
yast --install <package_name>
package_name can be a single short package name, for example gvim, which is installed with dependency checking, or the full path to an RPM package, which is installed without dependency checking.
If you need a command line based software management utility with functionality beyond what YaST provides, consider using Zypper. This utility uses the same software management library that is also the foundation for the YaST package manager. The basic usage of Zypper is covered in Section 2.1, “Using Zypper”.
To use YaST functionality in scripts, YaST provides command line support for individual modules. Not all modules have command line support. To display the available options of a module, enter:
yast <module_name> help
If a module does not provide command line support, the module is started in text mode and the following message appears:
This YaST module does not support the command line interface.
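If a module does provide command line support, it can be scripted directly. As an illustration, the following call uses the users module; the exact arguments are an assumption, so check yast users help for what your system accepts:

yast users add username=tux password=secret home=/home/tux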
This chapter describes Zypper and RPM, two command line tools for managing software. For a definition of the terminology used in this context (for example, repository, patch, or update) refer to Book “Start-Up”, Chapter 9 “Installing or Removing Software”, Section 9.1 “Definition of Terms”.
Zypper is a command line package manager for installing, updating and removing packages, as well as for managing repositories. It is especially useful for accomplishing remote software management tasks or managing software from shell scripts.
The general syntax of Zypper is:
zypper [--global-options] command [--command-options] [arguments]

The components enclosed in brackets are not required. See zypper help for a list of general options and all commands. To get help for a specific command, type zypper help command.
The simplest way to execute Zypper is to type its name, followed by a command. For example, to apply all needed patches to the system type:
zypper patch
Additionally, you can choose from one or more global options by typing them immediately before the command. For example, --non-interactive means running the command without asking anything (automatically applying the default answers):
zypper --non-interactive patch
To use the options specific to a particular command, type them right after the command. For example, --auto-agree-with-licenses means applying all needed patches to the system without asking to confirm any licenses (they will automatically be accepted):
zypper patch --auto-agree-with-licenses
Some commands require one or more arguments. When using the install command, for example, you need to specify which package(s) to install:
zypper install mplayer
Some options also require an argument. The following command will list all known patterns:
zypper search -t pattern
You can combine all of the above. For example, the following command will install the aspell-de and aspell-fr packages from the factory repository while being verbose:
zypper -v install --from factory aspell-de aspell-fr
The --from option makes sure to keep all repositories enabled (for solving any dependencies) while requesting the package from the specified repository.
Most Zypper commands have a --dry-run option that performs a simulation of the given command. It can be used for test purposes.
zypper remove --dry-run MozillaFirefox
Zypper supports the global --userdata string option. You can specify a string with this option, which gets written to Zypper's log files and plug-ins (such as the Btrfs plug-in). It can be used to mark and identify transactions in log files.
zypper --userdata string patch
To install or remove packages use the following commands:
zypper install package_name
zypper remove package_name
Zypper knows various ways to address packages for the install and remove commands.
zypper install MozillaFirefox
or
zypper install MozillaFirefox-3.5.3
zypper install mozilla:MozillaFirefox
Where mozilla is the alias of the repository from which to install.
The following command will install all packages that have names starting with “Moz”. Use with care, especially when removing packages.
zypper install 'Moz*'
When debugging a problem, you sometimes need to temporarily install a lot of -debuginfo packages, which give you more information about running processes. After your debugging session finishes and you need to clean the environment, run the following:
zypper remove '*-debuginfo'
Capabilities come in handy, for example, when you want to install a package without knowing its exact name (such as a Perl module). The following command installs the package that provides the capability firefox:
zypper install firefox
Together with a capability you can specify an architecture (such as x86_64) and/or a version. The version must be preceded by an operator: < (less than), <= (less than or equal), = (equal), >= (greater than or equal), > (greater than).
zypper install 'firefox.x86_64'
zypper install 'firefox>=3.5.3'
zypper install 'firefox.x86_64>=3.5.3'
You can also specify a local or remote path to a package:
zypper install /tmp/install/MozillaFirefox.rpm
zypper install http://download.opensuse.org/repositories/mozilla/SUSE_Factory/x86_64/MozillaFirefox-3.5.3-1.3.x86_64.rpm
To install and remove packages simultaneously, use the +/- modifiers. To install emacs and remove vim simultaneously, use:
zypper install emacs -vim
To remove emacs and install vim simultaneously, use:
zypper remove emacs +vim
To prevent a package name starting with - from being interpreted as a command option, always use it as the second argument. If this is not possible, precede it with --:
zypper install -emacs +vim    # Wrong
zypper install vim -emacs     # Correct
zypper install -- -emacs +vim # Same as above
zypper remove emacs +vim      # Same as above
If you want to automatically remove any packages that become unneeded after removing a specified package, use the --clean-deps option:
zypper rm package_name --clean-deps
By default, Zypper asks for a confirmation before installing or removing a selected package, or when a problem occurs. You can override this behavior using the --non-interactive option. This option must be given before the actual command (install, remove, and patch), as in the following:

zypper --non-interactive install package_name
This option allows the use of Zypper in scripts and cron jobs.
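As an illustration, a minimal unattended maintenance script (the script itself is an assumption, combining the global and command options described above) could look like this:

#!/bin/sh
# Apply all needed patches without prompting; license
# confirmations are accepted automatically.
zypper --non-interactive patch --auto-agree-with-licenses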
Do not remove packages such as glibc, zypper, kernel, or similar packages. These packages are mandatory for the system and, if removed, may cause the system to become unstable or stop working altogether.
If you want to install the corresponding source package of a package, use:
zypper source-install package_name
That command will also install the build dependencies of the specified package. If you do not want this, add the switch -D. To install only the build dependencies, use -d.
zypper source-install -D package_name # source package only
zypper source-install -d package_name # build dependencies only
Of course, this will only work if you have the repository with the source packages enabled in your repository list (it is added by default, but not enabled). See Section 2.1.5, “Managing Repositories with Zypper” for details on repository management.
A list of all source packages available in your repositories can be obtained with:
zypper search -t srcpackage
You can also download source packages for all installed packages to a local directory. To download source packages, use:
zypper source-download
The default download directory is /var/cache/zypper/source-download. You can change it using the --directory option. To only show missing or extraneous packages without downloading or deleting anything, use the --status option. To delete extraneous source packages, use the --delete option. To disable deleting, use the --no-delete option.
Normally you can only install packages from enabled repositories. The --plus-content tag option helps you specify repositories to be refreshed, temporarily enabled during the current Zypper session, and disabled after it completes.
For example, to enable repositories that may provide additional -debuginfo or -debugsource packages, use --plus-content debug. You can specify this option multiple times.
To temporarily enable such 'debug' repositories to install a specific -debuginfo package, use the option as follows:
zypper --plus-content debug install "debuginfo(build-id)=eb844a5c20c70a59fc693cd1061f851fb7d046f4"
The build-id string is reported by gdb for missing debuginfo packages.
To verify whether all dependencies are still fulfilled and to repair missing dependencies, use:
zypper verify
In addition to dependencies that must be fulfilled, some packages “recommend” other packages. These recommended packages are only installed if actually available and installable. In case recommended packages were made available after the recommending package has been installed (by adding additional packages or hardware), use the following command:
zypper install-new-recommends
This command is very useful after plugging in a webcam or Wi-Fi device. It will install drivers for the device and related software, if available. Drivers and related software are only installable if certain hardware dependencies are fulfilled.
There are three different ways to update software using Zypper: by installing patches, by installing a new version of a package, or by updating the entire distribution. The latter is achieved with zypper dist-upgrade. Upgrading openSUSE Leap is discussed in Book “Start-Up”, Chapter 12 “Upgrading the System and System Changes”.
To install all officially released patches applying to your system, run:
zypper patch
In this case, all patches available in your repositories are checked for relevance and installed, if necessary. After registering your openSUSE Leap installation, an official update repository containing such patches will be added to your system. The above command is all you need to enter to apply them when needed.
If a patch to be installed includes changes that require a system reboot, you will be warned before installing the patch.
Zypper knows three different commands to query for the availability of patches:
zypper patch-check
Lists the number of needed patches (patches that apply to your system but are not yet installed)
tux > sudo zypper patch-check
Loading repository data...
Reading installed packages...
5 patches needed (1 security patch)
zypper list-patches
Lists all needed patches (patches that apply to your system but are not yet installed)

tux > sudo zypper list-patches
Loading repository data...
Reading installed packages...

Repository     | Name        | Version | Category | Status | Summary
---------------+-------------+---------+----------+--------+----------------------------------
SLES12-Updates | SUSE-2014-8 | 1       | security | needed | openssl: Update to OpenSSL 1.0.1g
zypper patches
Lists all patches available for openSUSE Leap, regardless of whether they are already installed or apply to your installation.
It is also possible to list and install patches relevant to specific issues. To list specific patches, use the zypper list-patches command with the following options:
--bugzilla[=number]
Lists all needed patches for Bugzilla issues. Optionally, you can specify a bug number if you only want to list patches for this specific bug.
--cve[=number]
Lists all needed patches for CVE (Common Vulnerabilities and Exposures) issues, or only patches matching a certain CVE number, if specified.
zypper list-patches --cve

Lists all needed patches with a CVE number assigned.

zypper list-patches --all --cve
Issue | No.           | Patch             | Category    | Severity | Status
------+---------------+-------------------+-------------+----------+-----------
cve   | CVE-2015-0287 | SUSE-SLE-Module.. | recommended | moderate | needed
cve   | CVE-2014-3566 | SUSE-SLE-SERVER.. | recommended | moderate | not needed
[...]

Lists all patches with a CVE number assigned.

zypper list-patches --all --cve=CVE-2015-4477,CVE-2014-3639
Issue | No.           | Patch    | Category    | Severity  | Inter. | Status
------+---------------+----------+-------------+-----------+--------+-----------
cve   | CVE-2014-3639 | 2014-558 | security    | moderate  | reboot | not needed
cve   | CVE-2014-3639 | 2014-558 | security    | moderate  | reboot | not needed
cve   | CVE-2015-4477 | 2015-547 | security    | important | ---    | needed
cve   | CVE-2014-3639 | 3240     | recommended | moderate  | ---    | not needed
cve   | CVE-2014-3639 | 3240     | recommended | moderate  | ---    | not needed

Lists all patches with CVE-2015-4477 or CVE-2014-3639.
To install a patch for a specific Bugzilla or CVE issue, use the following commands:
zypper patch --bugzilla=number
or
zypper patch --cve=number
For example, to install a security patch with the CVE number CVE-2010-2713, execute:
zypper patch --cve=CVE-2010-2713
If a repository contains only new packages, but does not provide patches, zypper patch does not show any effect. To update all installed packages with newer available versions (while maintaining system integrity), use:
zypper update
To update individual packages, specify the package with either the update or install command:
zypper update package_name
zypper install package_name
A list of all new installable packages can be obtained with the command:
zypper list-updates
Note that this command only lists packages that match the following criteria:
have the same vendor as the already installed package,
are provided by repositories with at least the same priority as the already installed package,
are installable (all dependencies are satisfied).
A list of all new available packages (regardless of whether they are installable or not) can be obtained with:
zypper list-updates --all
To find out why a new package cannot be installed, use the zypper install or zypper update command as described above.
Whenever you remove a repository from zypper or upgrade your system, some packages can get in an “orphaned” state. These orphaned packages belong to no active repository anymore. The following command gives you a list of these:
zypper packages --orphaned
With this list, you can decide if a package is still needed or can be deinstalled safely.
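A possible workflow is to review the list and then remove an orphaned package together with its now unneeded dependencies (the package name is illustrative):

zypper packages --orphaned
zypper remove --clean-deps some-orphaned-package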
When patching, updating or removing packages, there may be running processes on the system which continue to use files deleted by the update or removal. Use zypper ps to show a list of processes using deleted files. In case the process belongs to a known service, the service name is listed, making it easy to restart the service. By default zypper ps shows a table:
PID  | PPID | UID | User  | Command      | Service      | Files
-----+------+-----+-------+--------------+--------------+-------------------
814  | 1    | 481 | avahi | avahi-daemon | avahi-daemon | /lib64/ld-2.19.s->
     |      |     |       |              |              | /lib64/libdl-2.1->
     |      |     |       |              |              | /lib64/libpthrea->
     |      |     |       |              |              | /lib64/libc-2.19->
[...]
PID: ID of the process
PPID: ID of the parent process
UID: ID of the user running the process
Login: Login name of the user running the process
Command: Command used to execute the process
Service: Service name (only if the command is associated with a system service)
Files: The list of the deleted files
The output format of zypper ps can be controlled as follows:

zypper ps -s
Create a short table not showing the deleted files.
PID  | PPID | UID  | User    | Command      | Service
-----+------+------+---------+--------------+--------------
814  | 1    | 481  | avahi   | avahi-daemon | avahi-daemon
817  | 1    | 0    | root    | irqbalance   | irqbalance
1567 | 1    | 0    | root    | sshd         | sshd
1761 | 1    | 0    | root    | master       | postfix
1764 | 1761 | 51   | postfix | pickup       | postfix
1765 | 1761 | 51   | postfix | qmgr         | postfix
2031 | 2027 | 1000 | tux     | bash         |
zypper ps -ss
Show only processes associated with a system service.
PID  | PPID | UID | User    | Command      | Service
-----+------+-----+---------+--------------+--------------
814  | 1    | 481 | avahi   | avahi-daemon | avahi-daemon
817  | 1    | 0   | root    | irqbalance   | irqbalance
1567 | 1    | 0   | root    | sshd         | sshd
1761 | 1    | 0   | root    | master       | postfix
1764 | 1761 | 51  | postfix | pickup       | postfix
1765 | 1761 | 51  | postfix | qmgr         | postfix
zypper ps -sss
Only show system services using deleted files.
avahi-daemon
irqbalance
postfix
sshd
zypper ps --print "systemctl status %s"
Show the commands to retrieve status information for services which might need a restart.
systemctl status avahi-daemon
systemctl status irqbalance
systemctl status postfix
systemctl status sshd
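Building on this output, a small helper script could restart every affected service in one go (a sketch; review the service list before running it on a production system):

#!/bin/sh
# Restart all system services that still use files deleted by an update.
for service in $(zypper ps -sss); do
    systemctl restart "$service"
done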
For more information about service handling refer to Chapter 10, The systemd Daemon.
All installation or patch commands of Zypper rely on a list of known repositories. To list all repositories known to the system, use the command:
zypper repos
The result will look similar to the following output:
# | Alias        | Name         | Enabled | Refresh
--+--------------+--------------+---------+--------
1 | SLEHA-12-GEO | SLEHA-12-GEO | Yes     | No
2 | SLEHA-12     | SLEHA-12     | Yes     | No
3 | SLES12       | SLES12       | Yes     | No
When specifying repositories in various commands, an alias, URI or repository number from the zypper repos command output can be used. A repository alias is a short version of the repository name for use in repository handling commands. Note that the repository numbers can change after modifying the list of repositories. The alias will never change by itself.
By default, details such as the URI or the priority of the repository are not displayed. Use the following command to list all details:
zypper repos -d
To add a repository, run
zypper addrepo URI alias
URI can either be an Internet repository, a network resource, a directory or a CD or DVD (see http://en.opensuse.org/openSUSE:Libzypp_URIs for details). The alias is a shorthand and unique identifier of the repository. You can freely choose it, with the only exception that it needs to be unique. Zypper will issue a warning if you specify an alias that is already in use.
If you want to remove a repository from the list, use the command zypper removerepo together with the alias or number of the repository you want to delete. For example, to remove the repository SLEHA-12-GEO from Example 2.1, “Zypper—List of Known Repositories”, use one of the following commands:
zypper removerepo 1
zypper removerepo "SLEHA-12-GEO"
Enable or disable repositories with zypper modifyrepo. You can also alter the repository's properties (such as refreshing behavior, name or priority) with this command. The following command will enable the repository named updates, turn on auto-refresh and set its priority to 20:
zypper modifyrepo -er -p 20 'updates'
Modifying repositories is not limited to a single repository—you can also operate on groups:
-a: all repositories
-l: local repositories
-t: remote repositories
-m TYPE: repositories of a certain type (where TYPE can be one of the following: http, https, ftp, cd, dvd, dir, file, cifs, smb, nfs, hd, iso)
To rename a repository alias, use the renamerepo command. The following example changes the alias from Mozilla Firefox to firefox:
zypper renamerepo 'Mozilla Firefox' firefox
Zypper offers various methods to query repositories or packages. To get lists of all products, patterns, packages or patches available, use the following commands:
zypper products zypper patterns zypper packages zypper patches
To query all repositories for certain packages, use search. It works on package names, or, optionally, on package summaries and descriptions. Strings wrapped in / are interpreted as regular expressions. By default, the search is not case-sensitive.
Packages with names containing fire:
zypper search "fire"

The package named exactly MozillaFirefox:
zypper search --match-exact "MozillaFirefox"

Also search in package summaries and descriptions:
zypper search -d fire

Only display packages not already installed:
zypper search -u fire

Packages containing fir not followed by e:
zypper se "/fir[^e]/"
To search for packages which provide a special capability, use the command what-provides. For example, if you want to know which package provides the Perl module SVN::Core, use the following command:
zypper what-provides 'perl(SVN::Core)'
To query single packages, use info with an exact package name as an argument. It displays detailed information about a package. To also show what is required/recommended by the package, use the options --requires and --recommends:
zypper info --requires MozillaFirefox
The what-provides package command is similar to rpm -q --whatprovides package, but RPM is only able to query the RPM database (that is, the database of all installed packages). Zypper, on the other hand, will tell you about providers of the capability from any repository, not only those that are installed.
Zypper now comes with a configuration file, allowing you to permanently change Zypper's behavior (either system-wide or user-specific). For system-wide changes, edit /etc/zypp/zypper.conf. For user-specific changes, edit ~/.zypper.conf. If ~/.zypper.conf does not yet exist, you can use /etc/zypp/zypper.conf as a template: copy it to ~/.zypper.conf and adjust it to your liking. Refer to the comments in the file for help about the available options.
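For example, to start a user-specific configuration from the system-wide template:

cp /etc/zypp/zypper.conf ~/.zypper.conf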
If you have problems accessing packages from configured repositories (for example, Zypper cannot find a certain package even though you know it exists in one of the repositories), it can help to refresh the repositories with:
zypper refresh
If that does not help, try
zypper refresh -fdb
This forces a complete refresh and rebuild of the database, including a forced download of raw metadata.
If the Btrfs file system is used on the root partition and snapper is installed, Zypper automatically calls snapper (via a script installed by snapper) when committing changes to the file system, to create appropriate file system snapshots. These snapshots can be used to revert any changes made by Zypper. See Chapter 3, System Recovery and Snapshot Management with Snapper for more information.
For more information on managing software from the command line, enter zypper help, zypper help command, or refer to the zypper(8) man page. For a complete and detailed command reference, including cheat sheets with the most important commands, and information on how to use Zypper in scripts and applications, refer to http://en.opensuse.org/SDB:Zypper_usage. A list of software changes for the latest openSUSE Leap version can be found at http://en.opensuse.org/openSUSE:Zypper_versions.
RPM (RPM Package Manager) is used for managing software packages. Its main commands are rpm and rpmbuild. The powerful RPM database can be queried by users, system administrators and package builders for detailed information about the installed software.
Essentially, rpm has five modes: installing, uninstalling (or updating) software packages, rebuilding the RPM database, querying the RPM database or individual RPM archives, integrity checking of packages, and signing packages. rpmbuild can be used to build installable packages from pristine sources.
Installable RPM archives are packed in a special binary format. These archives consist of the program files to install and certain meta information used during the installation by rpm to configure the software package, or stored in the RPM database for documentation purposes. RPM archives normally have the extension .rpm.
For several packages, the components needed for software development (libraries, headers, include files, etc.) have been put into separate packages. These development packages are only needed if you want to compile software yourself (for example, the most recent GNOME packages). They can be identified by the name extension -devel, such as the packages alsa-devel and gimp-devel.
RPM packages have a GPG signature. To verify the signature of an RPM package, use the command rpm --checksig package-1.2.3.rpm to determine whether the package originates from SUSE or from another trustworthy facility. This is especially recommended for update packages from the Internet.
While fixing issues in the operating system, you might need to install a Problem Temporary Fix (PTF) into a production system. The packages provided by SUSE are signed against a special PTF key. However, in contrast to SUSE Linux Enterprise 11, this key is not imported by default on SUSE Linux Enterprise 12 systems. To manually import the key, use the following command:
rpm --import /usr/share/doc/packages/suse-build-key/suse_ptf_key.asc
After importing the key, you can install PTF packages on your system.
Normally, the installation of an RPM archive is quite simple: rpm -i package.rpm. With this command the package is installed, but only if its dependencies are fulfilled and if there are no conflicts with other packages. With an error message, rpm requests those packages that need to be installed to meet dependency requirements. In the background, the RPM database ensures that no conflicts arise—a specific file can only belong to one package. By choosing different options, you can force rpm to ignore these defaults, but this is only for experts. Otherwise, you risk compromising the integrity of the system and possibly jeopardize the ability to update the system.
The options -U or --upgrade and -F or --freshen can be used to update a package (for example, rpm -F package.rpm). This command removes the files of the old version and immediately installs the new files. The difference between the two versions is that -U installs packages that previously did not exist in the system, but -F merely updates previously installed packages. When updating, rpm updates configuration files carefully using the following strategy:
If a configuration file was not changed by the system administrator, rpm installs the new version of the appropriate file. No action by the system administrator is required.
If a configuration file was changed by the system administrator before the update, rpm saves the changed file with the extension .rpmorig or .rpmsave (backup file) and installs the version from the new package (but only if the originally installed file and the newer version are different). If this is the case, compare the backup file (.rpmorig or .rpmsave) with the newly installed file and make your changes again in the new file. Afterwards, be sure to delete all .rpmorig and .rpmsave files to avoid problems with future updates.
.rpmnew files appear if the configuration file already exists and if the noreplace label was specified in the .spec file.
Following an update, .rpmsave and .rpmnew files should be removed after comparing them, so they do not obstruct future updates. The .rpmorig extension is assigned if the file has not previously been recognized by the RPM database. Otherwise, .rpmsave is used. In other words, .rpmorig results from updating from a foreign format to RPM, while .rpmsave results from updating from an older RPM to a newer RPM. .rpmnew does not disclose any information as to whether the system administrator has made any changes to the configuration file. A list of these files is available in /var/adm/rpmconfigcheck. Some configuration files (like /etc/httpd/httpd.conf) are not overwritten to allow continued operation.
The -U switch is not just an equivalent to uninstalling with the -e option and installing with the -i option. Use -U whenever possible.
To remove a package, enter rpm -e package. This command only deletes the package if there are no unresolved dependencies. It is theoretically impossible to delete Tcl/Tk, for example, as long as another application requires it. Even in this case, RPM calls for assistance from the database. If such a deletion is, for whatever reason, impossible (even if no additional dependencies exist), it may be helpful to rebuild the RPM database using the option --rebuilddb.
Delta RPM packages contain the difference between an old and a new version of an RPM package. Applying a delta RPM onto an old RPM results in a completely new RPM. It is not necessary to have a copy of the old RPM because a delta RPM can also work with an installed RPM. The delta RPM packages are even smaller in size than patch RPMs, which is an advantage when transferring update packages over the Internet. The drawback is that update operations with delta RPMs involved consume considerably more CPU cycles than plain or patch RPMs.
The makedeltarpm and applydeltarpm binaries are part of the delta RPM suite (package deltarpm) and help you create and apply delta RPM packages. With the following command, you can create a delta RPM called new.delta.rpm. The command assumes that old.rpm and new.rpm are present:
makedeltarpm old.rpm new.rpm new.delta.rpm
Using applydeltarpm, you can reconstruct the new RPM from the file system if the old package is already installed:
applydeltarpm new.delta.rpm new.rpm
To derive it from the old RPM without accessing the file system, use the -r option:
applydeltarpm -r old.rpm new.delta.rpm new.rpm
See /usr/share/doc/packages/deltarpm/README for technical details.
With the -q option, rpm initiates queries, making it possible to inspect an RPM archive (by adding the option -p) and to query the RPM database of installed packages. Several switches are available to specify the type of information required. See Table 2.1, “The Most Important RPM Query Options”.
-i: Package information
-l: File list
-f FILE: Query the package that contains the file FILE (the full path must be specified with FILE)
-s: File list with status information (implies -l)
-d: List only documentation files (implies -l)
-c: List only configuration files (implies -l)
--dump: File list with complete details (to be used with -l, -c, or -d)
--provides: List features of the package that another package can request with --requires
--requires, -R: Capabilities the package requires
--scripts: Installation scripts (preinstall, postinstall, uninstall)
For example, the command rpm -q -i wget displays the information shown in Example 2.2, “rpm -q -i wget”.
rpm -q -i wget
Name        : wget                     Relocations: (not relocatable)
Version     : 1.11.4                   Vendor: openSUSE
Release     : 1.70                     Build Date: Sat 01 Aug 2009 09:49:48 CEST
Install Date: Thu 06 Aug 2009 14:53:24 CEST    Build Host: build18
Group       : Productivity/Networking/Web/Utilities    Source RPM: wget-1.11.4-1.70.src.rpm
Size        : 1525431                  License: GPL v3 or later
Signature   : RSA/8, Sat 01 Aug 2009 09:50:04 CEST, Key ID b88b2fd43dbdc284
Packager    : http://bugs.opensuse.org
URL         : http://www.gnu.org/software/wget/
Summary     : A Tool for Mirroring FTP and HTTP Servers
Description :
Wget enables you to retrieve WWW documents or FTP files from a server.
This can be done in script files or via the command line.
[...]
The option -f only works if you specify the complete file name with its full path. Provide as many file names as desired. For example, the command

rpm -q -f /bin/rpm /usr/bin/wget

results in:
rpm-4.8.0-4.3.x86_64 wget-1.11.4-11.18.x86_64
If only part of the file name is known, use a shell script as shown in Example 2.3, “Script to Search for Packages”. Pass the partial file name to the script shown as a parameter when running it.
#! /bin/sh
for i in $(rpm -q -a -l | grep $1); do
    echo "\"$i\" is in package:"
    rpm -q -f $i
    echo ""
done
The command rpm -q --changelog package displays a detailed list of change information about a specific package, sorted by date.
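For example, to limit the output to the most recent entries for the wget package:

rpm -q --changelog wget | head -n 20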
With the help of the installed RPM database, verification checks can be made. Initiate these with -V or --verify. With this option, rpm shows all files in a package that have been changed since installation. rpm uses eight character symbols to give some hints about the following changes:
5: MD5 check sum
S: File size
L: Symbolic link
T: Modification time
D: Major and minor device numbers
U: Owner
G: Group
M: Mode (permissions and file type)
In the case of configuration files, the letter c is printed. For example, for changes to /etc/wgetrc (wget package):
rpm -V wget
S.5....T c /etc/wgetrc
The files of the RPM database are placed in /var/lib/rpm. If the partition /usr has a size of 1 GB, this database can occupy nearly 30 MB, especially after a complete update. If the database is much larger than expected, it is useful to rebuild the database with the option --rebuilddb. Before doing this, make a backup of the old database. The cron script cron.daily makes daily copies of the database (packed with gzip) and stores them in /var/adm/backup/rpmdb. The number of copies is controlled by the variable MAX_RPMDB_BACKUPS (default: 5) in /etc/sysconfig/backup. The size of a single backup is approximately 1 MB for 1 GB in /usr.
All source packages carry a .src.rpm extension (source RPM).
Source packages can be copied from the installation medium to the hard disk and unpacked with YaST. They are not, however, marked as installed ([i]) in the package manager. This is because the source packages are not entered in the RPM database. Only installed operating system software is listed in the RPM database. When you “install” a source package, only the source code is added to the system.
The following directories must be available for rpm and rpmbuild in /usr/src/packages (unless you specified custom settings in a file like /etc/rpmrc):
SOURCES: for the original sources (.tar.bz2 or .tar.gz files, etc.) and for distribution-specific adjustments (mostly .diff or .patch files)
SPECS: for the .spec files, similar to a meta Makefile, which control the build process
BUILD: all the sources are unpacked, patched and compiled in this directory
RPMS: where the completed binary packages are stored
SRPMS: here are the source RPMs
When you install a source package with YaST, all the necessary components are installed in /usr/src/packages: the sources and the adjustments in SOURCES and the relevant .spec file in SPECS.
Do not experiment with system components (glibc, rpm, etc.), because this endangers the stability of your system.
The following example uses the wget.src.rpm package. After installing the source package, you should have files similar to those in the following list:
/usr/src/packages/SOURCES/wget-1.11.4.tar.bz2
/usr/src/packages/SOURCES/wgetrc.patch
/usr/src/packages/SPECS/wget.spec
rpmbuild -bX /usr/src/packages/SPECS/wget.spec starts the compilation, where X is a wild card for various stages of the build process (see the output of --help or the RPM documentation for details). The following is merely a brief explanation:
-bp
Prepare sources in /usr/src/packages/BUILD: unpack and patch.
-bc
Do the same as -bp, but with additional compilation.
-bi
Do the same as -bp, but with additional installation of the built software. Caution: if the package does not support the BuildRoot feature, you might overwrite configuration files.
-bb
Do the same as -bi, but with the additional creation of the binary package. If the compile was successful, the binary should be in /usr/src/packages/RPMS.
-ba
Do the same as -bb, but with the additional creation of the source RPM. If the compilation was successful, the binary should be in /usr/src/packages/SRPMS.
--short-circuit
Skip some steps.
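For example, to run the complete build of wget, including creation of the binary and source RPMs:

rpmbuild -ba /usr/src/packages/SPECS/wget.spec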
The binary RPM created can now be installed with rpm -i or, preferably, with rpm -U. Installation with rpm makes it appear in the RPM database.
The danger with many packages is that unwanted files are added to the running system during the build process. To prevent this, use build, which creates a defined environment in which the package is built. To establish this chroot environment, the build script must be provided with a complete package tree. This tree can be made available on the hard disk, via NFS, or from DVD. Set the position with build --rpms directory. Unlike rpm, the build command looks for the .spec file in the source directory. To build wget (like in the above example) with the DVD mounted in the system under /media/dvd, use the following commands as root:
cd /usr/src/packages/SOURCES/
mv ../SPECS/wget.spec .
build --rpms /media/dvd/suse/ wget.spec
Subsequently, a minimum environment is established at /var/tmp/build-root. The package is built in this environment. Upon completion, the resulting packages are located in /var/tmp/build-root/usr/src/packages/RPMS.
The build script offers several additional options. For example, you can cause the script to prefer your own RPMs, omit the initialization of the build environment, or limit the rpm command to one of the above-mentioned stages. Access additional information with build --help and by reading the build man page.
Midnight Commander (mc) can display the contents of RPM archives and copy parts of them. It represents archives as virtual file systems, offering all usual menu options of Midnight Commander. Display the HEADER with F3. View the archive structure with the cursor keys and Enter. Copy archive components with F5.
A full-featured package manager is available as a YaST module. For details, see Book “Start-Up”, Chapter 9 “Installing or Removing Software”.
The ability to take file system snapshots and perform rollbacks on Linux has often been requested in the past. Snapper, in conjunction with the Btrfs file system or thin-provisioned LVM volumes, now fills that gap.
Btrfs, a copy-on-write file system for Linux, supports file system snapshots (a copy of the state of a subvolume at a certain point of time) of subvolumes (one or more separately mountable file systems within each physical partition). Snapshots are also supported on thin-provisioned LVM volumes formatted with XFS, Ext4 or Ext3. Snapper lets you create and manage these snapshots. It comes with a command line and a YaST interface. Starting with SUSE Linux Enterprise Server 12 it is also possible to boot from Btrfs snapshots—see Section 3.3, “System Rollback by Booting from Snapshots” for more information.
Using Snapper you can perform the following tasks:
Undo system changes made by zypper and YaST. See Section 3.2, “Using Snapper to Undo Changes” for details.
Restore files from previous snapshots. See Section 3.2.2, “Using Snapper to Restore Files” for details.
Do a system rollback by booting from a snapshot. See Section 3.3, “System Rollback by Booting from Snapshots” for details.
Manually create snapshots on the fly and manage existing snapshots. See Section 3.5, “Manually Creating and Managing Snapshots” for details.
Snapper on openSUSE Leap is set up to serve as an “undo and recovery tool” for system changes. By default, the root partition (/) of openSUSE Leap is formatted with Btrfs. Taking snapshots is automatically enabled if the root partition (/) is larger than approximately 8 GB. Taking snapshots on partitions other than / is not enabled by default.
When a snapshot is created, both the snapshot and the original point to the same blocks in the file system. So, initially a snapshot does not occupy additional disk space. If data in the original file system is modified, changed data blocks are copied while the old data blocks are kept for the snapshot. Therefore, a snapshot occupies the same amount of space as the data modified. So, over time, the amount of space a snapshot allocates constantly grows. As a consequence, deleting files from a Btrfs file system containing snapshots may not free disk space!
Snapshots always reside on the same partition or subvolume on which the snapshot has been taken. It is not possible to store snapshots on a different partition or subvolume.
As a result, partitions containing snapshots need to be larger than “normal” partitions. The exact amount strongly depends on the number of snapshots you keep and the amount of data modifications. As a rule of thumb, consider using twice the size you normally would.
Although snapshots themselves do not differ in a technical sense, we distinguish between three types of snapshots, based on the occasion on which they were taken:
A single snapshot is created every hour. Old snapshots are automatically deleted. By default, the first snapshot of the last ten days, months, and years are kept. Timeline snapshots are enabled by default, except for the root partition.
Whenever one or more packages are installed with YaST or Zypper, a pair of snapshots is created: one before the installation starts (“Pre”) and another one after the installation has finished (“Post”). In case an important system component such as the kernel has been installed, the snapshot pair is marked as important (important=yes). Old snapshots are automatically deleted. By default the last ten important snapshots and the last ten “regular” (including administration snapshots) snapshots are kept. Installation snapshots are enabled by default.
Whenever you administrate the system with YaST, a pair of snapshots is created: one when a YaST module is started (“Pre”) and another when the module is closed (“Post”). Old snapshots are automatically deleted. By default the last ten important snapshots and the last ten “regular” snapshots (including installation snapshots) are kept. Administration snapshots are enabled by default.
Some directories need to be excluded from snapshots for different reasons. The following list shows all directories that are excluded:
/boot/grub2/i386-pc, /boot/grub2/x86_64-efi, /boot/grub2/powerpc-ieee1275, /boot/grub2/s390x-emu
A rollback of the boot loader configuration is not supported. The directories listed above are architecture-specific. The first two directories are present on x86_64 machines, the latter two on IBM POWER and on IBM z Systems, respectively.
/home
If /home does not reside on a separate partition, it is excluded to avoid data loss on rollbacks.
/opt, /var/opt
Third-party products usually get installed to /opt. It is excluded to avoid uninstalling these applications on rollbacks.
/srv
Contains data for Web and FTP servers. It is excluded to avoid data loss on rollbacks.
/tmp, /var/tmp, /var/crash
All directories containing temporary files are excluded from snapshots.
/usr/local
This directory is used when manually installing software. It is excluded to avoid uninstalling these installations on rollbacks.
/var/lib/named
Contains zone data for the DNS server. Excluded from snapshots to ensure a name server can operate after a rollback.
/var/lib/mailman, /var/spool
Directories containing mails or mail queues are excluded to avoid a loss of mails after a rollback.
/var/lib/pgsql
Contains PostgreSQL data.
/var/log
Log file location. Excluded from snapshots to allow log file analysis after the rollback of a broken system.
openSUSE Leap comes with a reasonable default setup, which should be sufficient for most use cases. However, all aspects of taking automatic snapshots and snapshot keeping can be configured according to your needs.
Each of the three snapshot types (timeline, installation, administration) can be enabled or disabled independently.
Enabling:
snapper -c root set-config "TIMELINE_CREATE=yes"
Disabling:
snapper -c root set-config "TIMELINE_CREATE=no"
Timeline snapshots are enabled by default, except for the root partition.
Enabling:
Install the package snapper-zypp-plugin.
Disabling:
Uninstall the package snapper-zypp-plugin.
Installation snapshots are enabled by default.
Enabling:
Set USE_SNAPPER to yes in /etc/sysconfig/yast2.
Disabling:
Set USE_SNAPPER to no in /etc/sysconfig/yast2.
Administration snapshots are enabled by default.
Taking snapshot pairs upon installing packages with YaST or Zypper is handled by the snapper-zypp-plugin. An XML configuration file, /etc/snapper/zypp-plugin.conf, defines when to make snapshots. By default the file looks like the following:
 1 <?xml version="1.0" encoding="utf-8"?>
 2 <snapper-zypp-plugin-conf>
 3  <solvables>
 4   <solvable match="w" important="true">kernel-*</solvable>
 5   <solvable match="w" important="true">dracut</solvable>
 6   <solvable match="w" important="true">glibc</solvable>
 7   <solvable match="w" important="true">systemd*</solvable>
 8   <solvable match="w" important="true">udev</solvable>
 9   <solvable match="w">*</solvable>
10  </solvables>
11 </snapper-zypp-plugin-conf>
The match attribute defines whether the pattern is a Unix shell-style wild card (w) or a regular expression (re).
If the given pattern matches and the corresponding package is marked as important (for example kernel packages), the snapshot will also be marked as important.
The pattern matches a package name. Based on the setting of the match attribute, special characters are either interpreted as shell wild cards or regular expressions.
The last line unconditionally matches all packages.
With this configuration, snapshot pairs are made whenever a package is installed (line 9). When the kernel, dracut, glibc, systemd, or udev packages marked as important are installed, the snapshot pair will also be marked as important (lines 4 to 8). All rules are evaluated.
To disable a rule, either delete it or deactivate it using XML comments. To prevent the system from making snapshot pairs for every package installation for example, comment line 9:
1 <?xml version="1.0" encoding="utf-8"?>
2 <snapper-zypp-plugin-conf>
3  <solvables>
4   <solvable match="w" important="true">kernel-*</solvable>
5   <solvable match="w" important="true">dracut</solvable>
6   <solvable match="w" important="true">glibc</solvable>
7   <solvable match="w" important="true">systemd*</solvable>
8   <solvable match="w" important="true">udev</solvable>
9   <!-- <solvable match="w">*</solvable> -->
10  </solvables>
11 </snapper-zypp-plugin-conf>
Creating a new subvolume underneath the /
-hierarchy
and permanently mounting it is supported. However, you need to make sure
not to create it inside a snapshot, since you would not be able to delete
snapshots anymore after a rollback.
openSUSE Leap is configured with the /@/
subvolume
which serves as an independent root for permanent subvolumes such as
/opt
, /srv
,
/home
and others. Any new subvolumes you create and
permanently mount need to be created in this initial root file system.
To do so, run the following commands. In this example, a new subvolume
/usr/important
is created from
/dev/sda2
.
mount /dev/sda2 -o subvol=@ /mnt
btrfs subvolume create /mnt/usr/important
umount /mnt
The corresponding entry in /etc/fstab
needs to look
like the following:
/dev/sda2 /usr/important btrfs subvol=@/usr/important 0 0
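To verify the result, you can mount the subvolume via the new fstab entry and list the subvolumes of the file system. The following is a quick check, assuming the example paths used above:
mount /usr/important
btrfs subvolume list /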
Snapshots occupy disk space. To prevent disks from running out of space and thus causing system outages, old snapshots are automatically deleted. By default, the following snapshots are kept:
the first snapshot of the last ten days, months, and years
the last ten installation snapshot pairs marked as important
the last ten installation/administration snapshots
Refer to Section 3.4.1, “Managing Existing Configurations” for instructions on how to change these values.
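For example, to keep only the last five important snapshot pairs and the last five regular ones for the root configuration, the respective limits could be lowered with set-config (the values shown are only an illustration; see Section 3.4.1.1, “Configuration Data” for all options):
snapper -c root set-config "NUMBER_LIMIT_IMPORTANT=5"
snapper -c root set-config "NUMBER_LIMIT=5"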
Apart from snapshots on Btrfs
file systems, Snapper
also supports taking snapshots on thin-provisioned LVM volumes
(snapshots on regular LVM volumes are not
supported) formatted with XFS, Ext4 or Ext3. For more information and
setup instructions on LVM volumes, refer to
Section 5.2, “LVM Configuration”.
To use Snapper on a thin-provisioned LVM volume you need to
create a Snapper configuration for it. On LVM it is required to specify
the file system with
--fstype=lvm(FILESYSTEM)
.
ext3, ext4 or xfs are valid values for FILESYSTEM. Example:
snapper -c lvm create-config --fstype="lvm(xfs)" /thin_lvm
You can adjust this configuration according to your needs as described in Section 3.4.1, “Managing Existing Configurations”.
Snapper on openSUSE Leap is preconfigured to serve as a tool that lets
you undo changes made by zypper
and YaST. For
this purpose, Snapper is configured to create a pair of snapshots before
and after each run of zypper
and YaST. Snapper
also lets you restore system files that have been accidentally deleted or
modified. Timeline snapshots for the root partition need to be enabled
for this purpose—see
Section 3.1.3.1, “Disabling/Enabling Snapshots” for details.
By default, automatic snapshots as described above are configured for the
root partition and its subvolumes. To make snapshots available
for other partitions such as /home
for example, you
can create custom configurations.
When working with snapshots to restore data, it is important to know that there are two fundamentally different scenarios Snapper can handle:
When undoing changes as described in the following, two snapshots are compared and the changes between these two snapshots are reverted. Using this method also allows you to explicitly select the files that should be restored.
When doing rollbacks as described in Section 3.3, “System Rollback by Booting from Snapshots”, the system is reset to the state at which the snapshot was taken.
When undoing changes, it is also possible to compare a snapshot against the current system. When restoring all files from such a comparison, this will have the same result as doing a rollback. However, using the method described in Section 3.3, “System Rollback by Booting from Snapshots” for rollbacks should be preferred, since it is faster and allows you to review the system before doing the rollback.
There is no mechanism to ensure data consistency when creating a
snapshot. Whenever a file (for example, a database) is written at the
same time as the snapshot is being created, it will result in a broken
or partly written file. Restoring such a file will cause problems.
Furthermore, some system files such as /etc/mtab
must never be restored. Therefore it is strongly recommended to
always closely review the list of changed files and
their diffs. Only restore files that really belong to the action you
want to revert.
If you set up the root partition with Btrfs
during
the installation, Snapper—preconfigured for doing rollbacks of
YaST or Zypper changes—will automatically be installed.
Every time you start a YaST module or a Zypper transaction, two
snapshots are created: a “pre-snapshot” capturing the state
of the file system before the start of the module and a
“post-snapshot” after the module has been finished.
Using the YaST Snapper module or the snapper
command line tool, you can undo the changes made by YaST/Zypper by
restoring files from the “pre-snapshot”. Comparing two
snapshots the tools also allow you to see which files have been changed.
You can also display the differences between two versions of a file
(diff).
Start the yast2 snapper module.
Make sure Current Configuration is set to root. This is always the case unless you have manually added own Snapper configurations.
Choose a pair of pre- and post-snapshots from the list. Both YaST and Zypper snapshot pairs are of the type Pre & Post. YaST snapshots are labeled zypp(y2base) in the Description column; Zypper snapshots are labeled zypp(zypper).
Click Show Changes to open the list of files that differ between the two snapshots.
Review the list of files. To display a “diff” between the pre- and post-version of a file, select it from the list.
To restore one or more files, select the relevant files or directories by activating the respective check box. Click Restore Selected and confirm the action by clicking Yes.
To restore a single file, activate its diff view by clicking its name. Click Restore From First and confirm your choice with Yes.
Using the snapper Command
Get a list of YaST and Zypper snapshots by running
snapper list
-t pre-post
.
YaST snapshots are labeled as zypp(y2base) in the Description column; Zypper snapshots are labeled zypp(zypper).
root #
snapper list -t pre-post
Pre # | Post # | Pre Date | Post Date | Description
------+--------+-------------------------------+-------------------------------+--------------
311 | 312 | Tue 06 May 2014 14:05:46 CEST | Tue 06 May 2014 14:05:52 CEST | zypp(y2base)
340 | 341 | Wed 07 May 2014 16:15:10 CEST | Wed 07 May 2014 16:15:16 CEST | zypp(zypper)
342 | 343 | Wed 07 May 2014 16:20:38 CEST | Wed 07 May 2014 16:20:42 CEST | zypp(y2base)
344 | 345 | Wed 07 May 2014 16:21:23 CEST | Wed 07 May 2014 16:21:24 CEST | zypp(zypper)
346 | 347 | Wed 07 May 2014 16:41:06 CEST | Wed 07 May 2014 16:41:10 CEST | zypp(y2base)
348 | 349 | Wed 07 May 2014 16:44:50 CEST | Wed 07 May 2014 16:44:53 CEST | zypp(y2base)
350 | 351 | Wed 07 May 2014 16:46:27 CEST | Wed 07 May 2014 16:46:38 CEST | zypp(y2base)
Get a list of changed files for a snapshot pair with snapper status PRE..POST. Files with content changes are marked with c, files that have been added are marked with + and deleted files are marked with -.
root #
snapper status 350..351
+..... /usr/share/doc/packages/mikachan-fonts
+..... /usr/share/doc/packages/mikachan-fonts/COPYING
+..... /usr/share/doc/packages/mikachan-fonts/dl.html
c..... /usr/share/fonts/truetype/fonts.dir
c..... /usr/share/fonts/truetype/fonts.scale
+..... /usr/share/fonts/truetype/みかちゃん-p.ttf
+..... /usr/share/fonts/truetype/みかちゃん-pb.ttf
+..... /usr/share/fonts/truetype/みかちゃん-ps.ttf
+..... /usr/share/fonts/truetype/みかちゃん.ttf
c..... /var/cache/fontconfig/7ef2298fde41cc6eeb7af42e48b7d293-x86_64.cache-4
c..... /var/lib/rpm/Basenames
c..... /var/lib/rpm/Dirnames
c..... /var/lib/rpm/Group
c..... /var/lib/rpm/Installtid
c..... /var/lib/rpm/Name
c..... /var/lib/rpm/Packages
c..... /var/lib/rpm/Providename
c..... /var/lib/rpm/Requirename
c..... /var/lib/rpm/Sha1header
c..... /var/lib/rpm/Sigmd5
To display the diff for a certain file, run snapper
diff
PRE..POST
FILENAME. If you do not specify
FILENAME, a diff for all files will be
displayed.
root #
snapper diff 350..351 /usr/share/fonts/truetype/fonts.scale
--- /.snapshots/350/snapshot/usr/share/fonts/truetype/fonts.scale 2014-04-23 15:58:57.000000000 +0200
+++ /.snapshots/351/snapshot/usr/share/fonts/truetype/fonts.scale 2014-05-07 16:46:31.000000000 +0200
@@ -1,4 +1,4 @@
-1174
+1486
ds=y:ai=0.2:luximr.ttf -b&h-luxi mono-bold-i-normal--0-0-0-0-c-0-iso10646-1
ds=y:ai=0.2:luximr.ttf -b&h-luxi mono-bold-i-normal--0-0-0-0-c-0-iso8859-1
[...]
To restore one or more files run snapper -v undochange PRE..POST FILENAMES. If you do not specify any file names, all changed files will be restored.
root #
snapper -v undochange 350..351
create:0 modify:13 delete:7
undoing change...
deleting /usr/share/doc/packages/mikachan-fonts
deleting /usr/share/doc/packages/mikachan-fonts/COPYING
deleting /usr/share/doc/packages/mikachan-fonts/dl.html
deleting /usr/share/fonts/truetype/みかちゃん-p.ttf
deleting /usr/share/fonts/truetype/みかちゃん-pb.ttf
deleting /usr/share/fonts/truetype/みかちゃん-ps.ttf
deleting /usr/share/fonts/truetype/みかちゃん.ttf
modifying /usr/share/fonts/truetype/fonts.dir
modifying /usr/share/fonts/truetype/fonts.scale
modifying /var/cache/fontconfig/7ef2298fde41cc6eeb7af42e48b7d293-x86_64.cache-4
modifying /var/lib/rpm/Basenames
modifying /var/lib/rpm/Dirnames
modifying /var/lib/rpm/Group
modifying /var/lib/rpm/Installtid
modifying /var/lib/rpm/Name
modifying /var/lib/rpm/Packages
modifying /var/lib/rpm/Providename
modifying /var/lib/rpm/Requirename
modifying /var/lib/rpm/Sha1header
modifying /var/lib/rpm/Sigmd5
undoing change done
Reverting user additions via undoing changes with Snapper is not recommended. Since certain directories are excluded from snapshots, files belonging to these users will remain in the file system. If a user with the same user ID as a deleted user is created, this user will inherit the files. Therefore it is strongly recommended to use the YaST User and Group Management tool to remove users.
Apart from the installation and administration snapshots, Snapper creates timeline snapshots. You can use these backup snapshots to restore files that have accidentally been deleted or to restore a previous version of a file. By making use of Snapper's diff feature you can also find out which modifications have been made at a certain point of time.
Being able to restore files is especially interesting for data, which
may reside on subvolumes or partitions for which snapshots are not taken
by default. To be able to restore files from home directories, for
example, create a separate Snapper configuration for
/home
doing automatic timeline snapshots. See
Section 3.4, “Creating and Modifying Snapper Configurations” for instructions.
Snapshots taken from the root file system (defined by Snapper's root configuration), can be used to do a system rollback. The recommended way to do such a rollback is to boot from the snapshot and then perform the rollback. See Section 3.3, “System Rollback by Booting from Snapshots” for details.
Performing a rollback would also be possible by restoring all files
from a root file system snapshot as described below. However, this is
not recommended. You may restore single files, for example a
configuration file from the /etc
directory,
but not the complete list of files from the snapshot.
This restriction only affects snapshots taken from the root file system!
Start the yast2 snapper module.
Choose the Configuration from which to choose a snapshot.
Select a timeline snapshot from which to restore a file and choose Show Changes. Timeline snapshots are of the type Single with a description value of timeline.
Select a file from the text box by clicking the file name. The difference between the snapshot version and the current system is shown. Activate the check box to select the file for restore. Do so for all files you want to restore.
Click Restore Selected and confirm the action by clicking Yes.
Using the snapper Command
Get a list of timeline snapshots for a specific configuration by running the following command:
snapper -c CONFIG list -t single | grep timeline
CONFIG needs to be replaced by an existing
Snapper configuration. Use snapper list-configs
to
display a list.
Get a list of changed files for a given snapshot by running the following command:
snapper -c CONFIG status SNAPSHOT_ID..0
Replace SNAPSHOT_ID by the ID for the snapshot from which you want to restore the file(s).
Optionally list the differences between the current file version and the one from the snapshot by running
snapper -c CONFIG diff SNAPSHOT_ID..0 FILENAME
If you do not specify FILENAME, the differences for all files are shown.
To restore one or more files, run
snapper -c CONFIG -v undochange SNAPSHOT_ID..0 FILENAME1 FILENAME2
If you do not specify file names, all changed files will be restored.
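As an illustration, the following sequence restores an accidentally changed file from a timeline snapshot of a hypothetical home configuration (the configuration name, snapshot ID and file name are assumptions):
snapper -c home list -t single | grep timeline
snapper -c home status 42..0
snapper -c home diff 42..0 /home/tux/.bashrc
snapper -c home -v undochange 42..0 /home/tux/.bashrc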
The GRUB 2 version included on openSUSE Leap can boot from
Btrfs snapshots. Together with Snapper's rollback feature, this allows you to
recover a misconfigured system. Only snapshots created for the default
Snapper configuration (root
) are bootable.
As of openSUSE Leap 42.1 system rollbacks are only supported by
SUSE if the default Snapper configuration (root
) and
the default configuration of the root partition have not been changed.
When booting a snapshot, the parts of the file system included in the snapshot are mounted read-only; all other file systems and parts that are excluded from snapshots are mounted read-write and can be modified.
When working with snapshots to restore data, it is important to know that there are two fundamentally different scenarios Snapper can handle:
When undoing changes as described in Section 3.2, “Using Snapper to Undo Changes”, two snapshots are compared and the changes between these two snapshots are reverted. Using this method also allows you to explicitly exclude selected files from being restored.
When doing rollbacks as described in the following, the system is reset to the state at which the snapshot was taken.
To do a rollback from a bootable snapshot, the following requirements must be met. When doing a default installation, the system is set up accordingly.
The root file system needs to be Btrfs. Booting from LVM volume snapshots is not supported.
The root file system needs to be on a single device, a single partition
and a single subvolume. Directories that are excluded from snapshots
such as /srv
(see
Section 3.1.2, “Directories That Are Excluded from Snapshots” for a full list) may reside on
separate partitions.
The system needs to be bootable via the installed boot loader.
To perform a rollback from a bootable snapshot, do as follows:
Boot the system. In the boot menu choose Bootable snapshots and select the snapshot you want to boot. The snapshots are sorted by date—the most recent snapshot is listed first.
Log in to the system. Carefully check whether everything works as expected. Note that you will not be able to write to any directory that is part of the snapshot. Data you write to other directories will not get lost, regardless of what you do next.
Depending on whether you want to perform the rollback or not, choose your next step:
If the system is in a state where you do not want to do a rollback, reboot to boot into the current system state, to choose a different snapshot, or to start the rescue system.
If you want to perform the rollback, run
sudo snapper rollback
and reboot afterwards. On the boot screen, choose the default boot entry to reboot into the reinstated system.
If snapshots are not disabled during installation, an initial bootable
snapshot is created at the end of the initial system installation. You can
go back to that state at any time by booting this snapshot. The snapshot
can be identified by the description after installation
.
A bootable snapshot is also created when starting a system upgrade to a service pack or a new major release (provided snapshots are not disabled).
To boot from a snapshot, reboot your machine and choose Start Bootloader from a read-only snapshot. A screen listing all bootable snapshots opens. The most recent snapshot is listed first, the oldest last. Use the keys ↓ and ↑ to navigate and press Enter to activate the selected snapshot. Activating a snapshot from the boot menu does not reboot the machine immediately, but rather opens the boot loader of the selected snapshot.
Each snapshot entry in the boot loader follows a naming scheme which makes it possible to identify it easily:
[*]OS (KERNEL,DATETTIME,DESCRIPTION)
[*]: If the snapshot was marked important, the entry is marked with a *.
OS: Operating system label.
KERNEL: Kernel version.
DATE: Date in the format YYYY-MM-DD.
TIME: Time in the format HH:MM.
DESCRIPTION: This field contains a description of the snapshot. In case of a manually created snapshot this is the string created with the option --description; in case of an automatically created snapshot it is the tool that was called, for example zypp(zypper).
It is possible to replace the default string in the description field of a snapshot with a custom string. This is useful, for example, if an automatically created description is not sufficient, or if a user-provided description is too long. To set a custom string STRING for snapshot NUMBER, use the following command:
snapper modify --userdata "bootloader=STRING" NUMBER
A complete system rollback, restoring the complete system to exactly the same state as it was in when a snapshot was taken, is not possible.
Root file system snapshots do not contain all directories. See Section 3.1.2, “Directories That Are Excluded from Snapshots” for details and reasons. As a general consequence, data from these directories is not restored, resulting in the following limitations.
Applications and add-ons installing data in subvolumes excluded from
the snapshot, such as /opt
, may not work after
a rollback, if other parts of the application data are also
installed on subvolumes included in the snapshot. Re-install the
application or the add-on to solve this problem.
If an application had changed file permissions and/or ownership in between snapshot and current system, the application may not be able to access these files. Reset permissions and/or ownership for the affected files after the rollback.
If a service or an application has established a new data format in between snapshot and current system, the application may not be able to read the affected data files after a rollback.
Subvolumes like /srv
may contain a mixture of
code and data. A rollback may result in non-functional code. A
downgrade of the PHP version, for example, may result in broken PHP
scripts for the Web server.
If a rollback removes users from the system, data that is owned by
these users in directories excluded from the snapshot, is not
removed. If a user with the same user ID is created, this user will
inherit the files. Use a tool like find
to locate
and remove orphaned files.
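To locate such orphaned files, you can, for example, search the directories excluded from snapshots for files that belong to no known user or group (a sketch; adjust the list of directories to your setup):
find /srv /var/spool /usr/local -xdev \( -nouser -o -nogroup \) -print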
A rollback of the boot loader is not possible, since all “stages” of the boot loader must fit together. This cannot be guaranteed when doing rollbacks.
The way Snapper behaves is defined in a configuration file that is
specific for each partition or Btrfs
subvolume. These
configuration files reside under
/etc/snapper/configs/
. The default configuration
installed with Snapper for the /
directory is named
root
. It creates and manages the YaST and
Zypper snapshots plus the hourly backup snapshot for
/
.
You may create your own configurations for other partitions formatted
with Btrfs
or existing subvolumes on a
Btrfs
partition. In the following example we will set
up a Snapper configuration for backing up the Web server data residing on
a separate, Btrfs
-formatted partition mounted at
/srv/www
.
After a configuration has been created, you can either use snapper itself or the YaST Snapper module to restore files from these snapshots. In YaST you need to select your Current Configuration, while for snapper you need to specify your configuration with the global switch -c (for example, snapper -c myconfig list).
To create a new Snapper configuration, run snapper
create-config
:
snapper -c www-data create-config /srv/www
-c www-data: Name of the configuration file.
/srv/www: Mount point of the partition or Btrfs subvolume to take snapshots of.
This command will create a new configuration file
/etc/snapper/configs/www-data
with reasonable
default values (taken from
/etc/snapper/config-templates/default
). Refer to
Section 3.4.1, “Managing Existing Configurations” for instructions on how to
adjust these defaults.
Default values for a new configuration are taken from
/etc/snapper/config-templates/default
. To use your
own set of defaults, create a copy of this file in the same directory
and adjust it to your needs. To use it, specify the -t
option with the create-config command:
snapper -c www-data create-config -t my_defaults /srv/www
The snapper command offers several subcommands for managing existing configurations. You can list, show, delete and modify them:
Use the command snapper list-configs
to get all
existing configurations:
root #
snapper list-configs
Config | Subvolume
-------+----------
root | /
usr | /usr
local | /local
Use the subcommand snapper
-c
CONFIG
delete-config to delete a
configuration. CONFIG needs to be replaced
by a configuration name shown by snapper
list-configs
.
Use the subcommand snapper
-c
CONFIG
get-config to display the
specified configuration. CONFIG needs to
be replaced by a configuration name shown by snapper
list-configs
. See
Section 3.4.1.1, “Configuration Data” for more
information on the configuration options.
Use the subcommand snapper
-c
CONFIG
set-config
OPTION=VALUE to
modify an option in the specified configuration.
CONFIG needs to be replaced by a
configuration name shown by snapper list-configs
.
Possible values for OPTION and
VALUE are listed in
Section 3.4.1.1, “Configuration Data”.
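For example, to display the root configuration and then raise the number of daily timeline snapshots to keep (the value is only an illustration):
snapper -c root get-config
snapper -c root set-config "TIMELINE_LIMIT_DAILY=14"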
Each configuration contains a list of options that can be modified from the command line. The following list provides details for each option:
ALLOW_GROUPS
,
ALLOW_USERS
Granting permissions to use snapshots to regular users. See Section 3.4.1.2, “Using Snapper as Regular User” for more information.
The default value is ""
.
BACKGROUND_COMPARISON
Defines whether pre and post snapshots should be compared in the background after creation.
The default value is "yes"
.
EMPTY_PRE_POST_CLEANUP
If set to yes, pre and post snapshot pairs that do not differ will be deleted.
The default value is "yes"
.
EMPTY_PRE_POST_MIN_AGE
Defines the minimum age in seconds a pre and post snapshot pair that does not differ must have before it can automatically be deleted.
The default value is "1800"
.
FSTYPE
File system type of the partition. Do not change.
The default value is "btrfs"
.
NUMBER_CLEANUP
Defines whether to automatically delete old installation and
administration snapshot pairs when the total snapshot count exceeds
a number specified with NUMBER_LIMIT
and an age specified with
NUMBER_MIN_AGE
. Valid values:
yes
, no
The default value is "no"
.
NUMBER_LIMIT
,
NUMBER_LIMIT_IMPORTANT
and
NUMBER_MIN_AGE
are always evaluated. Snapshots
are only deleted when all conditions are met.
If you always want to keep a certain number of snapshots regardless
of their age, set NUMBER_MIN_AGE
to
0
. On the other hand, if you do not want to keep
snapshots beyond a certain age, set NUMBER_LIMIT
and NUMBER_LIMIT_IMPORTANT
to
0
.
NUMBER_LIMIT
Defines how many installation and administration snapshot pairs not marked as important to keep if
NUMBER_CLEANUP
is set to yes
.
Only the youngest snapshots will be kept.
The default value is "50"
.
NUMBER_LIMIT_IMPORTANT
Defines how many snapshot pairs marked as important to keep if
NUMBER_CLEANUP
is set to yes
.
Only the youngest snapshots will be kept.
The default value is "10"
.
NUMBER_MIN_AGE
Defines the minimum age in seconds a snapshot pair must have before it can automatically be deleted.
The default value is "1800"
.
SUBVOLUME
Mount point of the partition or subvolume to snapshot. Do not change.
SYNC_ACL
If Snapper is to be used by regular users (see
Section 3.4.1.2, “Using Snapper as Regular User”) the users must be able to
access the .snapshots directories and to read
directories and to read
files within them. If SYNC_ACL is set to yes
,
Snapper automatically makes them accessible using ACLs for users and
groups from the ALLOW_USERS or ALLOW_GROUPS entries.
The default value is "no"
.
TIMELINE_CLEANUP
Defines whether to automatically delete old snapshots when the
snapshot count exceeds a number specified with the
TIMELINE_LIMIT_*
options and an age specified with
TIMELINE_MIN_AGE
. Valid values:
yes
, no
The default value is "no"
.
TIMELINE_CREATE
If set to yes
, hourly snapshots are created. This
is currently the only way to automatically create snapshots,
therefore setting it to yes
is strongly
recommended. Valid values: yes
,
no
The default value is "no"
.
TIMELINE_LIMIT_DAILY
,
TIMELINE_LIMIT_HOURLY
,
TIMELINE_LIMIT_MONTHLY
,
TIMELINE_LIMIT_YEARLY
Number of snapshots to keep for hour, day, month, year.
The default value for each entry is "10"
.
TIMELINE_CLEANUP="yes"
TIMELINE_CREATE="yes"
TIMELINE_LIMIT_DAILY="10"
TIMELINE_LIMIT_HOURLY="10"
TIMELINE_LIMIT_MONTHLY="10"
TIMELINE_LIMIT_YEARLY="10"
TIMELINE_MIN_AGE="1800"
This example configuration enables hourly snapshots which are automatically cleaned up. TIMELINE_MIN_AGE and TIMELINE_LIMIT_* are always evaluated together. In this example, the minimum age of a snapshot before it can be deleted is set to 30 minutes (1800 seconds). Since we create hourly snapshots, this ensures that only the latest snapshots are kept. If TIMELINE_LIMIT_DAILY is set to a non-zero value, the first snapshot of the day is kept, too. Snapshots to keep:
Hourly: The last ten snapshots that have been made.
Daily: The first daily snapshot that has been made is kept for the last ten days.
Monthly: The first snapshot made on the last day of the month is kept for the last ten months.
Yearly: The first snapshot made on the last day of the year is kept for the last ten years.
TIMELINE_MIN_AGE
Defines the minimum age in seconds a snapshot must have before it can automatically be deleted.
The default value is "1800"
.
By default Snapper can only be used by root
. However, there
are cases in which certain groups or users need to be able to create
snapshots or undo changes by reverting to a snapshot:
Web site administrators who want to take snapshots of
/srv/www
users who want to take a snapshot of their home directory
For these purposes Snapper configurations that grant permissions to
users and/or groups can be created. The corresponding
.snapshots
directory needs to be readable and
accessible by the specified users. The easiest way to achieve this is
to set the SYNC_ACL option to yes
.
Note that all steps in this procedure need to be run by root
.
If it does not exist yet, create a Snapper configuration for the partition or subvolume on which the user should be able to use Snapper. Refer to Section 3.4, “Creating and Modifying Snapper Configurations” for instructions. Example:
snapper --config web_data create-config /srv/www
The configuration file is created under
/etc/snapper/configs/CONFIG
,
where CONFIG is the value you specified with
-c/--config
in the previous step (for example
/etc/snapper/configs/web_data
). Adjust it
according to your needs; see
Section 3.4.1, “Managing Existing Configurations” for details.
Set values for ALLOW_USERS
and/or
ALLOW_GROUPS
to grant permissions to users and/or
groups, respectively. Multiple entries need to be separated by
Space. To grant permissions to the user
www_admin
for example, run:
snapper -c web_data set-config "ALLOW_USERS=www_admin" SYNC_ACL="yes"
The given Snapper configuration can now be used by the specified
user(s) and/or group(s). You can test it with the
list
command, for example:
www_admin:~ > snapper -c web_data list
Snapper is not restricted to creating and managing snapshots automatically by configuration; you can also create snapshot pairs (“before and after”) or single snapshots manually using either the command line tool or the YaST module.
All Snapper operations are carried out for an existing configuration (see
Section 3.4, “Creating and Modifying Snapper Configurations” for details). You can only take
snapshots of partitions or volumes for which a configuration exists. By
default the system configuration (root
) is used. If
you want to create or manage snapshots for your own configuration you
need to explicitly choose it. Use the drop-down box in YaST or specify the -c option on the command line (snapper -c MYCONFIG COMMAND).
Each snapshot consists of the snapshot itself and some metadata. When creating a snapshot you also need to specify the metadata. Modifying a snapshot means changing its metadata—you cannot modify its content. The following metadata is available for each snapshot:
Type: Snapshot type, see Section 3.5.1.1, “Snapshot Types” for details. This data cannot be changed.
Number: Unique number of the snapshot. This data cannot be changed.
Pre Number: Specifies the number of the corresponding pre snapshot. For snapshots of type post only. This data cannot be changed.
Description: A description of the snapshot.
Userdata: An extended description
where you can specify custom data in the form of a comma-separated
key=value list: reason=testing, project=foo
. This
field is also used to mark a snapshot as important
(important=yes
) and to list the user that created
the snapshot (user=tux).
Cleanup-Algorithm: Cleanup-algorithm for the snapshot, see Section 3.5.1.2, “Cleanup-algorithms” for details.
Snapper knows three different types of snapshots: pre, post, and single. Physically they do not differ, but Snapper handles them differently.
pre
Snapshot of a file system before a modification. Each pre snapshot has a corresponding post snapshot. Used for the automatic YaST/Zypper snapshots, for example.
post
Snapshot of a file system after a modification. Each post snapshot has a corresponding pre snapshot. Used for the automatic YaST/Zypper snapshots, for example.
single
Stand-alone snapshot. Used for the automatic hourly snapshots, for example. This is the default type when creating snapshots.
Snapper provides three algorithms to clean up old snapshots. The algorithms are executed in a daily cron-job. It is possible to define the number of the different types of snapshots to keep in the Snapper configuration (see Section 3.4.1, “Managing Existing Configurations” for details).
number: Deletes old snapshots when a certain snapshot count is reached.
timeline: Deletes old snapshots having passed a certain age, but keeps a number of hourly, daily, monthly, and yearly snapshots.
empty-pre-post: Deletes pre/post snapshot pairs with empty diffs.
Creating a snapshot is done by running snapper create or by clicking Create in the YaST Snapper module. The following examples explain how to create snapshots from the command line. It should be easy to adapt them when using the YaST interface.
You should always specify a meaningful description to later be able to identify its purpose. Even more information can be specified via the user data option.
snapper create --description "Snapshot for week 2 2014"
Creates a stand-alone snapshot (type single) for the default
(root
) configuration with a description. Because
no cleanup-algorithm is specified, the snapshot will never be deleted
automatically.
snapper --config home create --description "Cleanup in ~tux"
Creates a stand-alone snapshot (type single) for a custom
configuration named home
with a description.
Because no cleanup-algorithm is specified, the snapshot will never be
deleted automatically.
snapper --config home create --description "Daily data backup" --cleanup-algorithm timeline
Creates a stand-alone snapshot (type single) for a custom configuration named home with a description. The snapshot will automatically be deleted when it meets the criteria specified for the timeline cleanup-algorithm in the configuration.
snapper create --type pre --print-number --description "Before the Apache config cleanup" --userdata "important=yes"
Creates a snapshot of the type pre
and prints the
snapshot number. First command needed to create a pair of snapshots
used to save a “before” and “after” state.
The snapshot is marked as important.
snapper create --type post --pre-number 30 --description "After the Apache config cleanup" --userdata "important=yes"
Creates a snapshot of the type post
paired with
the pre
snapshot number 30
.
Second command needed to create a pair of snapshots used to save a
“before” and “after” state. The snapshot is
marked as important.
snapper create --command COMMAND --description "Before and after COMMAND"
Automatically creates a snapshot pair before and after running COMMAND. This option is only available when using snapper on the command line.
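For example, to wrap a maintenance script in a pre/post snapshot pair in a single step (the script path is hypothetical):
snapper create --command "/usr/local/bin/maintenance.sh" --description "Before and after maintenance"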
Snapper allows you to modify the description, the cleanup algorithm, and the userdata of a snapshot. All other metadata cannot be changed. The following examples explain how to modify snapshots from the command line. It should be easy to adapt them when using the YaST interface.
To modify a snapshot on the command line, you need to know its number.
Use snapper
list
to display all
snapshots and their numbers.
The YaST Snapper module already lists all snapshots. Choose one from the list and click Modify.
snapper modify --cleanup-algorithm "timeline" 10
Modifies the metadata of snapshot 10 for the default
(root
) configuration. The cleanup algorithm is set
to timeline
.
snapper --config home modify --description "daily backup" --cleanup-algorithm "timeline" 120
Modifies the metadata of snapshot 120 for a custom configuration named home. A new description is set and the cleanup algorithm is set to timeline.
To delete a snapshot with the YaST Snapper module, choose a snapshot from the list and click Delete.
To delete a snapshot with the command line tool, you need to know its
number. Get it by running snapper list
. To delete a
snapshot, run snapper delete
NUMBER.
When deleting snapshots with Snapper, the freed space will be claimed by a
Btrfs process running in the background. Thus the visibility and the availability of free space are delayed. In case you need space freed by
deleting a snapshot to be available immediately, use the option
--sync
with the delete command.
When deleting a pre
snapshot, you should always
delete its corresponding post
snapshot (and vice
versa).
snapper delete 65
Deletes snapshot 65 for the default (root
)
configuration.
snapper -c home delete 89 90
Deletes snapshots 89 and 90 for a custom configuration named
home
.
snapper delete --sync 23
Deletes snapshot 23 for the default (root
)
configuration and makes the freed space available immediately.
Sometimes the Btrfs snapshot is present but the XML file containing the metadata for Snapper is missing. In this case the snapshot is not visible for Snapper and needs to be deleted manually:
btrfs subvolume delete /.snapshots/SNAPSHOTNUMBER/snapshot
rm -rf /.snapshots/SNAPSHOTNUMBER
If you delete snapshots to free space on your hard disk, make sure to delete old snapshots first. The older a snapshot is, the more disk space it occupies.
Snapshots are also automatically deleted by a daily cron-job. Refer to Section 3.5.1.2, “Cleanup-algorithms” for details.
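The cleanup algorithms can also be triggered manually at any time, independently of the cron-job. For example, to apply all three algorithms to the default configuration:
snapper cleanup number
snapper cleanup timeline
snapper cleanup empty-pre-post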
Why Does Snapper Never Show Changes in /var/log, /tmp and Other Directories?
For some directories we decided to exclude them from snapshots. See Section 3.1.2, “Directories That Are Excluded from Snapshots” for a list and reasons. To exclude a path from snapshots we create a subvolume for that path.
How Much Disk Space is Used by Snapshots? How Can I Free Disk Space?
Since the df command does not show the correct disk usage on Btrfs file systems, you need to use the command btrfs filesystem df MOUNT_POINT. Displaying the amount of disk space a snapshot allocates is currently not supported by the Btrfs tools.
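For example, to display the usage of the root file system:
btrfs filesystem df /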
To free space on a Btrfs
partition
containing snapshots you need to delete unneeded snapshots rather than
files. Older snapshots occupy more space than recent ones. See
Section 3.1.3.4, “Controlling Snapshot Archiving” for details.
Doing an upgrade from one service pack to another results in snapshots occupying a lot of disk space on the system subvolumes, because a lot of data gets changed (package updates). Manually deleting these snapshots after they are no longer needed is recommended. See Section 3.5.4, “Deleting Snapshots” for details.
Can I Boot a Snapshot from the Boot Loader?
Yes—refer to Section 3.3, “System Rollback by Booting from Snapshots” for details.
See the Snapper home page at http://snapper.io/.
Virtual Network Computing (VNC) enables you to control a remote computer via a graphical desktop (as opposed to a remote shell access). VNC is platform-independent and lets you access the remote machine from any operating system.
openSUSE Leap supports two different kinds of VNC sessions: One-time sessions that “live” as long as the VNC connection from the client is kept up, and persistent sessions that “live” until they are explicitly terminated.
A machine can offer both kinds of sessions simultaneously on different ports, but an open session cannot be converted from one type to the other.
A one-time session is initiated by the remote client. It starts a graphical login screen on the server. This way you can choose the user which starts the session and, if supported by the login manager, the desktop environment. Once you terminate the client connection to such a VNC session, all applications started within that session will be terminated, too. One-time VNC sessions cannot be shared, but it is possible to have multiple sessions on a single host at the same time.
Start YaST › Network Services › Remote Administration (VNC).
Check Allow Remote Administration.
If necessary, also check Open Port in Firewall (for example, when your network interface is configured to be in the External Zone). If you have more than one network interface, restrict opening the firewall ports to a specific interface via Firewall Details.
Confirm your settings with Finish.
In case not all needed packages are available yet, you need to approve the installation of missing packages.
The default configuration on openSUSE Leap serves sessions with a
resolution of 1024x768 pixels at a color depth of 16-bit. The sessions
are available on port 5901
for “regular” VNC viewers (equivalent to VNC display
1
) and on port
5801
for Web browsers.
Other configurations can be made available on different ports, see Section 4.1.3, “Configuring One-time VNC Sessions”.
VNC display numbers and X display numbers are independent in one-time sessions. A VNC display number is manually assigned to every configuration that the server supports (:1 in the example above). Whenever a VNC session is initiated with one of the configurations, it automatically gets a free X display number.
By default, both the VNC client and server try to communicate securely via a self-signed SSL certificate, which is generated after installation. You can either use the default one, or replace it with your own. When using the self-signed certificate, you need to confirm its signature before the first connection—both in the VNC viewer and the Web browser. The Java client is served over HTTPS, using the same certificate as VNC.
To initiate a one-time VNC session, a VNC viewer must be installed on
the client machine. The standard viewer on SUSE Linux products is
vncviewer
, provided by the package
tigervnc
. You may also view a
VNC session using your Web browser and a Java applet.
To start your VNC viewer and initiate a session with the server's default configuration, use the command:
vncviewer jupiter.example.com:1
Instead of the VNC display number you can also specify the port number with two colons:
vncviewer jupiter.example.com::5901
Alternatively use a Java-capable Web browser to view the VNC session by
entering the following URL: http://jupiter.example.com:5801
You can skip this section, if you do not need or want to modify the default configuration.
One-time VNC sessions are started via the
xinetd
daemon. A configuration
file is located at /etc/xinetd.d/vnc
. By default it
offers six configuration blocks: three for VNC viewers
(vnc1
to vnc3
), and three serving
a Java applet (vnchttpd1
to
vnchttpd3
). By default only vnc1
and vnchttpd1
are active.
To activate a configuration, comment out the line disable = yes by putting a # character in the first column, or remove that line completely. To deactivate a configuration, uncomment or add that line.
The Xvnc
server can be configured via the
server_args
option—see Xvnc --help for a list of options.
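For reference, a configuration block in /etc/xinetd.d/vnc may look similar to the following (the exact defaults can vary between releases; note how geometry and color depth are passed via server_args):
service vnc1
{
  socket_type = stream
  protocol    = tcp
  wait        = no
  user        = nobody
  server      = /usr/bin/Xvnc
  server_args = -noreset -inetd -once -query localhost -geometry 1024x768 -depth 16
  type        = UNLISTED
  port        = 5901
}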
When adding custom configurations, make sure they are not using ports that are already in use by other configurations, other services, or existing persistent VNC sessions on the same host.
Activate configuration changes by entering the following command:
sudo rcxinetd reload
When activating Remote Administration as described in
Procedure 4.1, “Enabling One-time VNC Sessions”, the ports
5801
and
5901
are opened in the
firewall. If the network interface serving the VNC sessions is
protected by a firewall, you need to manually open the respective ports
when activating additional ports for VNC sessions. See
Book “Security Guide”, Chapter 15 “Masquerading and Firewalls” for instructions.
A persistent VNC session is initiated on the server. The session and all applications started in this session run regardless of client connections until the session is terminated.
A persistent session can be accessed from multiple clients simultaneously. This is ideal for demonstration purposes or for trainings where the trainer might need access to the trainee's desktop. However, most of the times you probably do not want to share your VNC session.
In contrast to one-time sessions that start a display manager, a persistent session starts a ready-to-operate desktop that runs as the user that started the VNC session. Access to persistent sessions is protected by a password.
Open a shell and make sure you are logged in as the user that should own the VNC session.
If the network interface serving the VNC sessions is protected by a firewall, you need to manually open the port used by your session in the firewall. If starting multiple sessions you may alternatively open a range of ports. See Book “Security Guide”, Chapter 15 “Masquerading and Firewalls” for details on how to configure the firewall.
vncserver
uses the ports
5901
for display
:1
, 5902
for display :2
, and so on. For persistent sessions,
the VNC display and the X display usually have the same number.
To start a session with a resolution of 1024x768 pixels and with a color depth of 16-bit, enter the following command:
vncserver -geometry 1024x768 -depth 16
The vncserver
command picks an unused display number
when none is given and prints its choice. See man 1
vncserver
for more options.
When running vncserver for the first time, it asks for a password for full access to the session.
The password you are providing here is also used for future sessions
started by the same user. It can be changed with the
vncpasswd
command.
Make sure to use strong passwords of significant length (eight or more characters). Do not share these passwords.
VNC connections are unencrypted, so people who can sniff the network(s) between the two machines can read the password when it gets transferred at the beginning of a session.
To terminate the session shut down the desktop environment that runs inside the VNC session from the VNC viewer as you would shut it down if it was a regular local X session.
If you prefer to manually terminate a session, open a shell on the VNC
server and make sure you are logged in as the user that owns the VNC
session you want to terminate. Run the following command to terminate the
session that runs on display :1:
vncserver -kill :1
To connect to a persistent VNC session, a VNC viewer must be installed.
The standard viewer on SUSE Linux products is
vncviewer
, provided by the package tigervnc
(default). You may also view a VNC
session using your Web browser and a Java applet.
To start your VNC viewer and connect to display :1
of
the VNC server, use the command
vncviewer jupiter.example.com:1
Instead of the VNC display number you can also specify the port number with two colons:
vncviewer jupiter.example.com::5901
Alternatively use a Java-capable Web browser to view the VNC session by
entering the following URL: http://jupiter.example.com:5801
Persistent VNC sessions can be configured by editing
$HOME/.vnc/xstartup
. By default this shell script
starts the same GUI/window manager it was started from. In openSUSE Leap
this will either be GNOME or IceWM. If you want to start your session
with a window manager of your choice, set the variable
WINDOWMANAGER
:
WINDOWMANAGER=gnome vncserver -geometry 1024x768
WINDOWMANAGER=icewm vncserver -geometry 1024x768
Persistent VNC sessions are configured in a single per-user configuration. Multiple sessions started by the same user will all use the same start-up and password files.
Sophisticated system configurations require specific disk setups. All
common partitioning tasks can be done with YaST. To get persistent
device naming with block devices, use the block devices below
/dev/disk/by-id
or
/dev/disk/by-uuid
. Logical Volume Management (LVM) is
a disk partitioning scheme that is designed to be much more flexible than
the physical partitioning used in standard setups. Its snapshot
functionality enables easy creation of data backups. Redundant Array of
Independent Disks (RAID) offers increased data integrity, performance, and
fault tolerance. openSUSE Leap also supports multipath I/O
, and there is also the option to use iSCSI as a
networked disk.
With the expert partitioner, shown in Figure 5.1, “The YaST Partitioner”, you can manually modify the partitioning of one or several hard disks. You can add, delete, resize, and edit partitions, and access the soft RAID and LVM configuration.
Although it is possible to repartition your system while it is running, the risk of making a mistake that causes data loss is very high. Try to avoid repartitioning your installed system and always do a complete backup of your data before attempting to do so.
All existing or suggested partitions on all connected hard disks are displayed in the list of Available Storage in the YaST Expert Partitioner dialog. Entire hard disks are listed as devices without numbers, such as /dev/sda. Partitions are listed as parts of these devices, such as /dev/sda1. The size, type,
encryption status, file system, and mount point of the hard disks and
their partitions are also displayed. The mount point describes where the
partition appears in the Linux file system tree.
Several functional views are available on the left hand System View. These views can be used to gather information about existing storage configurations, to configure functions like RAID, Volume Management, and Crypt Files, or to view file systems with additional features, such as Btrfs, NFS, or TMPFS.
If you run the expert dialog during installation, any free hard disk space is also listed and automatically selected. To provide more disk space to openSUSE® Leap, free the needed space starting from the bottom toward the top of the list (starting from the last partition of a hard disk toward the first).
Every hard disk has a partition table with space for four entries. Every entry in the partition table corresponds to a primary partition or an extended partition. Only one extended partition entry is allowed, however.
A primary partition simply consists of a continuous range of cylinders (physical disk areas) assigned to a particular operating system. With primary partitions you would be limited to four partitions per hard disk, because more do not fit in the partition table. This is why extended partitions are used. Extended partitions are also continuous ranges of disk cylinders, but an extended partition may be divided into logical partitions itself. Logical partitions do not require entries in the partition table. In other words, an extended partition is a container for logical partitions.
If you need more than four partitions, create an extended partition as the fourth partition (or earlier). This extended partition should occupy the entire remaining free cylinder range. Then create multiple logical partitions within the extended partition. The maximum number of logical partitions is 63, independent of the disk type. It does not matter which types of partitions are used for Linux. Primary and logical partitions both function normally.
If you need to create more than 4 primary partitions on one hard disk, you need to use the GPT partition type. This type removes the primary partitions number restriction, and supports partitions bigger than 2 TB as well.
To use GPT, run the YaST Partitioner, click the relevant disk name in the System View and choose Expert › Create New Partition Table › GPT.
To create a partition from scratch select Hard Disks and then a hard disk with free space. The actual modification can be done in the Partitions tab:
Select Add and specify the partition type (primary or extended). Create up to four primary partitions or up to three primary partitions and one extended partition. Within the extended partition, create several logical partitions (see Section 5.1.1, “Partition Types”).
Specify the size of the new partition. You can either choose to occupy all the free unpartitioned space, or enter a custom size.
Select the file system to use and a mount point. YaST suggests a mount point for each partition created. To use a different mount method, like mount by label, select Fstab Options.
Specify additional file system options if your setup requires them. This is necessary, for example, if you need persistent device names. For details on the available options, refer to Section 5.1.3, “Editing a Partition”.
Click Finish to apply your partitioning setup and leave the partitioning module.
If you created the partition during installation, you are returned to the installation overview screen.
The default file system for the root partition is Btrfs (see Chapter 3, System Recovery and Snapshot Management with Snapper for more information on Btrfs). The root file system is the default subvolume and it is not listed in the list of created subvolumes. As a default Btrfs subvolume, it can be mounted as a normal file system.
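To check which subvolume is currently the default, you can run the following command on the mounted root file system (a read-only query; no changes are made):
btrfs subvolume get-default /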
The default partitioning setup suggests the root partition as Btrfs
with /boot
being a directory. If you need to have the
root partition encrypted in this setup, make sure to use the GPT
partition table type instead of the default MSDOS type. Otherwise
the GRUB2 boot loader may not have enough space for the second stage loader.
It is possible to create snapshots of Btrfs subvolumes—either
manually, or automatically based on system events. For example when
making changes to the file system, zypper
invokes the
snapper
command to create snapshots before and after
the change. This is useful if you are not satisfied with the change
zypper
made and want to restore the previous state.
As snapper
invoked by zypper
snapshots the root file system by default, it is
reasonable to exclude specific directories from snapshots, depending on the nature of the data they hold. That is why YaST suggests creating the following separate subvolumes:
/tmp, /var/tmp, /var/run
Directories with frequently changed content.
/var/spool
Contains user data, such as mails.
/var/lib
Holds dynamic data libraries and files plus state information pertaining to an application or the system.
By default, subvolumes with the option no copy on
write
are created for: /var/lib/mariadb
,
/var/lib/pgsql
, and
/var/lib/libvirt/images
.
/var/log
Contains system and applications' log files which should never be rolled back.
/var/crash
Contains memory dumps of crashed kernels.
/srv
Contains data files belonging to FTP and HTTP servers.
/opt
Contains third party software.
Because saved snapshots require more disk space, it is recommended to reserve more space for a Btrfs partition than for a partition not capable of snapshotting (such as Ext3). The recommended size for a root Btrfs partition with suggested subvolumes is 20 GB.
Subvolumes of a Btrfs partition can now be managed with the YaST Expert Partitioner module. You can add new or remove existing subvolumes.
Start the YaST Expert Partitioner with System › Partitioner.
Choose Btrfs in the left pane.
Select the Btrfs partition whose subvolumes you need to manage and click Edit.
Click Subvolume Handling. You can see a list of all existing subvolumes of the selected Btrfs partition. There are several @/.snapshots/xyz/snapshot entries—each of these subvolumes belongs to one existing snapshot.
Depending on whether you want to add or remove subvolumes, do the following:
To remove a subvolume, select it from the list of Existing Subvolumes and click Remove.
To add a new subvolume, enter its name to the New Subvolume text box and click Add new.
Confirm with OK and Finish.
Leave the partitioner with Finish.
When you create a new partition or modify an existing partition, you can set various parameters. For new partitions, the default parameters set by YaST are usually sufficient and do not require any modification. To edit your partition setup manually, proceed as follows:
Select the partition.
Click Edit to edit the partition and set the parameters:
File system ID: Even if you do not want to format the partition at this stage, assign it a file system ID to ensure that the partition is registered correctly. Typical values are Linux, Linux swap, Linux LVM, and Linux RAID.
File system: To change the partition file system, click Format Partition and select a file system type in the list.
openSUSE Leap supports several types of file systems. Btrfs is the Linux file system of choice for the root partition because of its advanced features. It supports copy-on-write functionality, creating snapshots, multi-device spanning, subvolumes, and other useful techniques. XFS, Ext3 and JFS are journaling file systems. These file systems can restore the system very quickly after a system crash, using write processes logged during the operation. Ext2 is not a journaling file system, but it is adequate for smaller partitions because it does not require much disk space for management.
The default file system for the root partition is Btrfs. The default file system for additional partitions is XFS.
Swap is a special format that allows the partition to be used as virtual memory. Create a swap partition of at least 256 MB. However, if you use up your swap space, consider adding more memory to your system instead of adding more swap space.
Changing the file system and reformatting partitions irreversibly deletes all data from the partition.
For details on the various file systems, refer to Storage Administration Guide.
If you activate the encryption, all data is written to the hard disk in encrypted form. This increases the security of sensitive data, but reduces the system speed, as the encryption takes some time to process. More information about the encryption of file systems is provided in Book “Security Guide”, Chapter 11 “Encrypting Partitions and Files”.
Specify the directory where the partition should be mounted in the file system tree. Select from YaST suggestions or enter any other name.
Specify various parameters contained in the global file system
administration file (/etc/fstab
). The default
settings should suffice for most setups. You can, for example,
change the file system identification from the device name to a
volume label. In the volume label, use all characters except
/
and space.
To get persistent device names, use the mount option Device ID, UUID or LABEL. In openSUSE Leap, persistent device names are enabled by default.
If you prefer to mount the partition by its label, you need to define one in the Volume label text entry. For example, you could use the partition label HOME for a partition intended to mount to /home.
If you intend to use quotas on the file system, use the mount option Enable Quota Support. This must be done before you can define quotas for users in the YaST User Management module. For further information on how to configure user quota, refer to Book “Start-Up”, Chapter 3 “Managing Users with YaST”, Section 3.3.4 “Managing Quotas”.
Select Finish to save the changes.
To resize an existing file system, select the partition and use Resize. Note that it is not possible to resize partitions while mounted. To resize partitions, unmount the relevant partition before running the partitioner.
After you select a hard disk device (like sda) in the System View pane, you can access the Expert menu in the lower right part of the window. The menu contains the following commands:
Create New Partition Table: This option helps you create a new partition table on the selected device.
Creating a new partition table on a device irreversibly removes all the partitions and their data from that device.
Clone This Disk: This option helps you clone the device partition layout (but not the data) to other available disk devices.
After you select the host name of the computer (the top-level of the tree in the System View pane), you can access the Configure menu in the lower right part of the window. The menu contains the following commands:
Configure iSCSI: To access SCSI over IP block devices, you first need to configure iSCSI. This results in additionally available devices in the main partition list.
Configure Multipath: Selecting this option helps you configure the multipath enhancement to the supported mass storage devices.
The following section includes a few hints and tips on partitioning that should help you make the right decisions when setting up your system.
Note that different partitioning tools may start counting the cylinders of a partition with 0 or with 1. When calculating the number of cylinders, you should always use the difference between the last and the first cylinder number and add one.
swap #
Swap is used to extend the available physical memory. It then becomes possible to use more memory than the physically available RAM. The memory management system of kernels before 2.4.10 needed swap as a safety measure. Then, if you did not have twice the size of your RAM in swap, the performance of the system suffered. These limitations no longer exist.
Linux uses a strategy called “Least Recently Used” (LRU) to select pages that might be moved from memory to disk. Therefore, running applications have more memory available and caching works more smoothly.
If an application tries to allocate the maximum allowed memory, problems with swap can arise. There are three major scenarios to look at:
A system without swap: the application gets the maximum allowed memory. All caches are freed, and thus all other running applications are slowed. After a few minutes, the kernel's out-of-memory kill mechanism activates and kills the process.
A system with medium-sized swap: at first, the system slows like a system without swap. After all physical RAM has been allocated, swap space is used as well. At this point, the system becomes very slow and it becomes impossible to run commands remotely. Depending on the speed of the hard disks that run the swap space, the system stays in this condition for about 10 to 15 minutes until the out-of-memory kill mechanism resolves the issue. Note that you will need a certain amount of swap if the computer needs to perform a “suspend to disk”. In that case, the swap size should be large enough to contain the necessary data from memory (512 MB–1 GB).
A system with lots of swap: in this case, it is better not to have an application that is out of control and swapping excessively. If you use such an application, the system will need many hours to recover. In the process, it is likely that other processes get timeouts and faults, leaving the system in an undefined state, even after terminating the faulty process. In this case, do a hard machine reboot and try to get it running again. Lots of swap is only useful if you have an application that relies on this feature. Such applications (like databases or graphics manipulation programs) often have an option to directly use hard disk space for their needs. It is advisable to use this option instead of using lots of swap space.
If your system is not out of control, but needs more swap after some time, it is possible to extend the swap space online. If you prepared a partition for swap space, add this partition with YaST. If you do not have a partition available, you can also use a swap file to extend the swap. Swap files are generally slower than partitions, but compared to physical RAM, both are extremely slow so the actual difference is negligible.
To add a swap file in the running system, proceed as follows:
Create an empty file in your system. For example, if you want to add a swap file with 128 MB swap at /var/lib/swap/swapfile, use the commands:
mkdir -p /var/lib/swap dd if=/dev/zero of=/var/lib/swap/swapfile bs=1M count=128
Initialize this swap file with the command
mkswap /var/lib/swap/swapfile
Do not reformat existing swap partitions with mkswap if possible. Reformatting with mkswap will change the UUID value of the swap partition. Either reformat via YaST (which will update /etc/fstab) or adjust /etc/fstab manually.
Activate the swap with the command
swapon /var/lib/swap/swapfile
To disable this swap file, use the command
swapoff /var/lib/swap/swapfile
Check the current available swap spaces with the command
cat /proc/swaps
Note that at this point, it is only temporary swap space. After the next reboot, it is no longer used.
To enable this swap file permanently, add the following line to
/etc/fstab
:
/var/lib/swap/swapfile swap swap defaults 0 0
From the Expert Partitioner, access the LVM configuration by clicking the volume management item in the left pane. However, if a working LVM configuration already exists on your system, it is automatically activated upon entering the initial LVM configuration of a session. In this case, all disks containing a partition (belonging to an activated volume group) cannot be repartitioned. The Linux kernel cannot reread the modified partition table of a hard disk when any partition on this disk is in use. If you already have a working LVM configuration on your system, physical repartitioning should not be necessary. Instead, change the configuration of the logical volumes.
At the beginning of the physical volumes (PVs), information about the volume is written to the partition. To reuse such a partition for other non-LVM purposes, it is advisable to delete the beginning of this volume. For example, in the VG system and PV /dev/sda2, do this with the command dd if=/dev/zero of=/dev/sda2 bs=512 count=1.
The file system used for booting (the root file system or /boot) must not be stored on an LVM logical volume. Instead, store it on a normal physical partition. In case you want to change your /usr or swap partitions, refer to Procedure 9.1, “Updating Init RAM Disk When Switching to Logical Volumes”.
This section briefly describes the principles behind the Logical Volume Manager (LVM) and its multipurpose features. In Section 5.2.2, “LVM Configuration with YaST”, learn how to set up LVM with YaST.
Using LVM is sometimes associated with increased risk such as data loss. Risks also include application crashes, power failures, and faulty commands. Save your data before implementing LVM or reconfiguring volumes. Never work without a backup.
LVM enables flexible distribution of hard disk space over several file systems. It was developed because the need to change the segmentation of hard disk space sometimes arises only after the initial partitioning has been done. Because it is difficult to modify partitions on a running system, LVM provides a virtual pool (volume group, VG for short) of disk space from which logical volumes (LVs) can be created as needed. The operating system accesses these LVs instead of the physical partitions. Volume groups can occupy more than one disk, so that several disks or parts of them may constitute one single VG. This way, LVM provides a kind of abstraction from the physical disk space that allows its segmentation to be changed in a much easier and safer way than with physical repartitioning. Background information regarding physical partitioning can be found in Section 5.1.1, “Partition Types” and Section 5.1, “Using the YaST Partitioner”.
Figure 5.3, “Physical Partitioning versus LVM” compares physical partitioning (left) with LVM segmentation (right). On the left side, one single disk has been divided into three physical partitions (PART), each with a mount point (MP) assigned so that the operating system can gain access. On the right side, two disks have been divided into two and three physical partitions each. Two LVM volume groups (VG 1 and VG 2) have been defined. VG 1 contains two partitions from DISK 1 and one from DISK 2. VG 2 contains the remaining two partitions from DISK 2. In LVM, the physical disk partitions that are incorporated in a volume group are called physical volumes (PVs). Within the volume groups, four LVs (LV 1 through LV 4) have been defined. They can be used by the operating system via the associated mount points. The borders between different LVs need not be aligned with any partition border. See the border between LV 1 and LV 2 in this example.
LVM features:
Several hard disks or partitions can be combined in a large logical volume.
Provided the configuration is suitable, an LV (such as
/usr
) can be enlarged if free space is exhausted.
With LVM, it is possible to add hard disks or LVs in a running system. However, this requires hotpluggable hardware.
It is possible to activate a "striping mode" that distributes the data stream of an LV over several PVs. If these PVs reside on different disks, the read and write performance is enhanced, as with RAID 0.
The snapshot feature enables consistent backups (especially for servers) of the running system.
With these features, LVM is ready for heavily used home PCs or small servers. LVM is well-suited for users with a growing data stock (as in the case of databases, music archives, or user directories), as it allows file systems that are larger than a single physical hard disk. Another advantage of LVM is that up to 256 LVs can be added. However, working with LVM is different from working with conventional partitions. Instructions and further information about configuring LVM is available in the official LVM HOWTO at http://tldp.org/HOWTO/LVM-HOWTO/.
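For readers who prefer the command line, the same building blocks can also be created with the LVM tools directly. The following is a minimal sketch; the device name /dev/sdb1 and the names system and home are examples only:
pvcreate /dev/sdb1
vgcreate system /dev/sdb1
lvcreate -n home -L 20G system
mkfs.xfs /dev/system/home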
Starting from Kernel version 2.6, LVM version 2 is available, which is backward-compatible with the previous LVM and enables the continued management of old volume groups. When creating new volume groups, decide whether to use the new format or the backward-compatible version. LVM 2 does not require any kernel patches. It uses the device mapper integrated in kernel 2.6. This kernel only supports LVM version 2. Therefore, when talking about LVM, this section always refers to LVM version 2.
Starting from Kernel version 3.4, LVM supports thin provisioning. A thin-provisioned volume has a virtual capacity and a real capacity. Virtual capacity is the volume storage capacity that is available to a host. Real capacity is the storage capacity that is allocated to a volume copy from a storage pool. In a fully allocated volume, the virtual capacity and real capacity are the same. In a thin-provisioned volume, however, the virtual capacity can be much larger than the real capacity. If a thin-provisioned volume does not have enough real capacity for a write operation, the volume is taken offline and an error is logged.
For more general information, see http://wikibon.org/wiki/v/Thin_provisioning.
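On the command line, a thin pool and a thin volume inside it can be created with lvcreate. This is a sketch with example names (volume group system, pool pool0, volume thinvol); note that the virtual size given with -V may exceed the size of the pool:
lvcreate -L 10G -T system/pool0
lvcreate -V 20G -T system/pool0 -n thinvol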
The YaST LVM configuration can be reached from the YaST Expert Partitioner (see Section 5.1, “Using the YaST Partitioner”) within the volume management item in the left pane. The Expert Partitioner allows you to edit and delete existing partitions and also create new ones that need to be used with LVM. The first task is to create PVs that provide space to a volume group:
Select a hard disk from the list of available hard disks.
Change to the partitions tab.
Click the add button and enter the desired size of the PV on this disk.
Use the edit option and change the file system ID to Linux LVM (0x8E). Do not mount this partition.
Repeat this procedure until you have defined all the desired physical volumes on the available disks.
If no volume group exists on your system, you must add one (see Figure 5.4, “Creating a Volume Group”). It is possible to create additional groups by clicking the volume management item in the left pane and then the add volume group button. One single volume group is usually sufficient.
Enter a name for the VG, for example, system.
Select the desired physical extent size. This value defines the size of a physical block in the volume group. All the disk space in a volume group is handled in blocks of this size.
Add the prepared PVs to the VG by selecting the device and clicking the add button. Selecting several devices is possible by holding Ctrl while selecting the devices.
Select the finish button to make the VG available to further configuration steps.
If you have multiple volume groups defined and want to add or remove PVs, select the volume group in the volume management list and click the resize button. In the following window, you can add PVs to or remove them from the selected volume group.
After the volume group has been filled with PVs, define the LVs which the operating system should use in the next dialog. Choose the current volume group and change to the logical volumes tab. Add, edit, resize, and delete LVs as needed until all space in the volume group has been occupied. Assign at least one LV to each volume group.
Click the add button and go through the wizard-like pop-up that opens:
Enter the name of the LV. For a partition that should be mounted to /home, a name like HOME could be used.
Select the type of the LV. It can be either a normal volume, a thin pool, or a thin volume. Note that you need to create a thin pool first, which can store individual thin volumes. The big advantage of thin provisioning is that the total sum of all thin volumes stored in a thin pool can exceed the size of the pool itself.
Select the size and the number of stripes of the LV. If you have only one PV, selecting more than one stripe is not useful.
Choose the file system to use on the LV and the mount point.
By using stripes it is possible to distribute the data stream in the LV among several PVs (striping). However, striping a volume can only be done over different PVs, each providing at least the amount of space of the volume. The maximum number of stripes equals the number of PVs, where a stripe count of "1" means "no striping". Striping only makes sense with PVs on different hard disks, otherwise performance will decrease.
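On the command line, a striped LV can be created with the lvcreate options -i (number of stripes) and -I (stripe size in KB). The names below are examples only and assume a volume group named system with at least two PVs:
lvcreate -i 2 -I 64 -L 10G -n striped_lv system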
YaST cannot, at this point, verify the correctness of your entries concerning striping. Any mistake made here is apparent only later when the LVM is implemented on disk.
If you have already configured LVM on your system, the existing logical volumes can also be used. Before continuing, assign appropriate mount points to these LVs. Then return to the YaST Expert Partitioner and finish your work there.
The purpose of RAID (redundant array of independent disks) is to combine several hard disk partitions into one large virtual hard disk to optimize performance and/or data security. Most RAID controllers use the SCSI protocol, because it can address a larger number of hard disks in a more effective way than the IDE protocol and is more suitable for parallel command processing. There are some RAID controllers that support IDE or SATA hard disks. Soft RAID provides the advantages of RAID systems without the additional cost of hardware RAID controllers. However, this requires some CPU time and has memory requirements that make it unsuitable for high-performance computers.
With openSUSE® Leap, you can combine several hard disks into one soft RAID system. RAID implies several strategies for combining several hard disks in a RAID system, each with different goals, advantages, and characteristics. These variations are commonly known as RAID levels.
Common RAID levels are:
This level improves the performance of your data access by spreading out blocks of each file across multiple disk drives. Actually, this is not really a RAID, because it does not provide data backup, but the name RAID 0 for this type of system is commonly used. With RAID 0, two or more hard disks are pooled together. Performance is enhanced, but the RAID system is destroyed and your data lost if even one hard disk fails.
This level provides adequate security for your data, because the data is copied to another hard disk 1:1. This is known as hard disk mirroring. If one disk is destroyed, a copy of its contents is available on the other one. All disks but one could be damaged without endangering your data. However, if the damage is not detected, the damaged data can be mirrored to the undamaged disk. This could result in the same loss of data. The writing performance suffers in the copying process compared to using single disk access (10 to 20 % slower), but read access is significantly faster in comparison to any one of the normal physical hard disks. The reason is that the duplicate data can be parallel-scanned. Generally it can be said that Level 1 provides nearly twice the read transfer rate of single disks and almost the same write transfer rate as single disks.
RAID 5 is an optimized compromise between Level 0 and Level 1, in terms of performance and redundancy. The hard disk space equals the number of disks used minus one. The data is distributed over the hard disks as with RAID 0. Parity blocks, created on one of the partitions, exist for security reasons. They are linked to each other with XOR, enabling the contents to be reconstructed by the corresponding parity block in case of system failure. With RAID 5, no more than one hard disk can fail at the same time. If one hard disk fails, it must be replaced as soon as possible to avoid the risk of losing data.
To further increase the reliability of the RAID system, it is possible to use RAID 6. In this level, even if two disks fail, the array can still be reconstructed. With RAID 6, at least four hard disks are needed to run the array. Note that when running as software RAID, this configuration needs a considerable amount of CPU time and memory.
This RAID implementation combines features of RAID 0 and RAID 1: the data is first mirrored in separate RAID 1 arrays, which are then combined into a RAID 0 array. In each RAID 1 sub-array, one disk can fail without any damage to the data. A minimum of four disks and an even number of disks is needed to run a RAID 10. This type of RAID is used for database applications where a huge load is expected.
Several other RAID levels have been developed (RAID 2, RAID 3, RAID 4, RAIDn, RAID 10, RAID 0+1, RAID 30, RAID 50, etc.), some being proprietary implementations created by hardware vendors. These levels are not very common and therefore are not explained here.
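Outside of YaST, a soft RAID can also be assembled with mdadm. The following sketch creates a RAID 1 array from two example partitions; adjust the device names and the file system to your setup:
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1
mkfs.xfs /dev/md0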
The YaST soft RAID configuration can be reached from the YaST Expert Partitioner, described in Section 5.1, “Using the YaST Partitioner”. This partitioning tool enables you to edit and delete existing partitions and create new ones to be used with soft RAID:
Select a hard disk from the list of available hard disks.
Change to the partitions tab.
Click the add button and enter the desired size of the RAID partition on this disk.
Use the edit option and change the file system ID to Linux RAID (0xFD). Do not mount this partition.
Repeat this procedure until you have defined all the desired RAID partitions on the available disks.
For RAID 0 and RAID 1, at least two partitions are needed—for RAID 1, usually exactly two and no more. If RAID 5 is used, at least three partitions are required; RAID 6 and RAID 10 require at least four partitions. It is recommended to use partitions of the same size only. The RAID partitions should be located on different hard disks to decrease the risk of losing data if one is defective (RAID 1 and 5) and to optimize the performance of RAID 0. After creating all the partitions to use with RAID, start the RAID configuration.
In the next dialog, choose between RAID levels 0, 1, 5, 6 and 10. Then, select all partitions with either the “Linux RAID” or “Linux native” type that should be used by the RAID system. No swap or DOS partitions are shown.
For RAID types where the order of added disks matters, you can mark individual disks with one of the letters A to E. Click the classify button, select the disk, and click one of the Class X buttons, where X is the letter you want to assign to the disk. Assign all available RAID disks this way and confirm your selection. You can easily sort the classified disks with the sorting buttons, or add a sort pattern from a text file.
To add a previously unassigned partition to the selected RAID volume, first click the partition, then the add button. Assign all partitions reserved for RAID. Otherwise, the space on the partition remains unused. After assigning all partitions, continue to select the available RAID options.
In this last step, set the file system to use, encryption, and the mount point for the RAID volume. After completing the configuration, you can see the /dev/md0 device and others indicated with RAID in the expert partitioner.
Check the file /proc/mdstat to find out whether a RAID partition has been damaged. If the system fails, shut down your Linux system and replace the defective hard disk with a new one partitioned the same way. Then restart your system and enter the command mdadm /dev/mdX --add /dev/sdX. Replace 'X' with your particular device identifiers. This integrates the hard disk automatically into the RAID system and fully reconstructs it.
Note that although you can access all data during the rebuild, you may encounter some performance issues until the RAID has been fully rebuilt.
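To watch the rebuild progress, you can query the array status; /dev/md0 is an example device name:
mdadm --detail /dev/md0
watch cat /proc/mdstat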
Configuration instructions and more details for soft RAID can be found in the HOWTOs at:
/usr/share/doc/packages/mdadm/Software-RAID.HOWTO.html
Linux RAID mailing lists are available, such as http://marc.info/?l=linux-raid.
openSUSE Leap supports the parallel installation of multiple kernel versions. When installing a second kernel, a boot entry and an initrd are automatically created, so no further manual configuration is needed. When rebooting the machine, the newly added kernel is available as an additional boot option.
Using this functionality, you can safely test kernel updates while being able to always fall back to the proven former kernel. To do so, do not use the update tools (such as the YaST Online Update or the updater applet), but instead follow the process described in this chapter.
Be aware that you lose your entire support entitlement for the machine when installing a self-compiled or a third-party kernel. Only kernels shipped with openSUSE Leap and kernels delivered via the official update channels for openSUSE Leap are supported.
It is recommended to check your boot loader configuration after having installed another kernel to set the default boot entry of your choice. See Section 12.3, “Configuring the Boot Loader with YaST” for more information.
Installing multiple versions of a software package (multiversion support) is enabled by default on openSUSE Leap. To verify this setting, proceed as follows:
Open /etc/zypp/zypp.conf with the editor of your choice as root.
Search for the string multiversion. If multiversion is enabled for all kernel packages capable of this feature, the following line appears uncommented:
multiversion = provides:multiversion(kernel)
To restrict multiversion support to certain kernel flavors, add the package names as a comma-separated list to the multiversion option in /etc/zypp/zypp.conf—for example:
multiversion = kernel-default,kernel-default-base,kernel-source
Save your changes.
Make sure that required vendor-provided kernel modules (Kernel Module Packages) are also installed for the new updated kernel. The kernel update process will not warn about kernel modules that may be missing, because package requirements are still fulfilled by the old kernel that is kept on the system.
When frequently testing new kernels with multiversion support enabled, the boot menu quickly becomes confusing. Since a /boot partition usually has limited space, you also might run into trouble with /boot overflowing. While you may delete unused kernel versions manually with YaST or Zypper (as described below), you can also configure libzypp to automatically delete kernels no longer used. By default, no kernels are deleted.
Open /etc/zypp/zypp.conf with the editor of your choice as root.
Search for the string multiversion.kernels and activate this option by uncommenting the line. This option takes a comma-separated list of the following values:
3.12.24-7.1: keep the kernel with the specified version number
latest: keep the kernel with the highest version number
latest-N: keep the kernel with the Nth highest version number
running: keep the running kernel
oldest: keep the kernel with the lowest version number (the one that was originally shipped with openSUSE Leap)
oldest+N: keep the kernel with the Nth lowest version number
Here are some examples:
multiversion.kernels = latest,running
Keep the latest kernel and the one currently running. This is similar to not enabling the multiversion feature, except that the old kernel is removed after the next reboot and not immediately after the installation.
multiversion.kernels = latest,latest-1,running
Keep the last two kernels and the one currently running.
multiversion.kernels = latest,running,3.12.25.rc7-test
Keep the latest kernel, the one currently running, and 3.12.25.rc7-test.
Keeping the running Kernel
Unless you use special setups, you probably always want to keep the running Kernel. If you do not keep the running Kernel, it will be deleted in case of a Kernel update. This in turn makes it necessary to immediately reboot the system after the update, since modules for the Kernel that is currently running can no longer be loaded once they have been deleted.
Start YaST and open the software manager via Software › Software Management.
List all packages capable of providing multiple versions by choosing the multiversion package classification filter.
Select a package and open its Versions tab in the bottom pane on the left.
tab in the bottom pane on the left.To install a package, click its check box. A green check mark indicates it is selected for installation.
To remove an already installed package (marked with a white check mark), click its check box until a red X indicates it is selected for removal.
Click Accept to start the installation.
Use the command zypper se -s 'kernel*' to display a list of all kernel packages available:
S | Name | Type | Version | Arch | Repository --+----------------+------------+-----------------+--------+------------------- v | kernel-default | package | 2.6.32.10-0.4.1 | x86_64 | Alternative Kernel i | kernel-default | package | 2.6.32.9-0.5.1 | x86_64 | (System Packages) | kernel-default | srcpackage | 2.6.32.10-0.4.1 | noarch | Alternative Kernel i | kernel-default | package | 2.6.32.9-0.5.1 | x86_64 | (System Packages) ...
Specify the exact version when installing:
zypper in kernel-default-2.6.32.10-0.4.1
When uninstalling a kernel, use the command zypper se -si 'kernel*' to list all kernels installed and zypper rm PACKAGENAME-VERSION to remove the package.
This chapter introduces GNOME configuration options which administrators can use to adjust system-wide settings, such as customizing menus, installing themes, configuring fonts, changing preferred applications, and locking down capabilities.
These configuration options are stored in the GConf system. Access the GConf system with tools such as the gconftool-2 command line interface or the gconf-editor GUI tool.
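For example, a single key can be read and written from the command line as follows. The key used here is only an illustration; the available keys can be browsed with gconf-editor:
gconftool-2 --get /desktop/gnome/interface/font_name
gconftool-2 --set --type string /desktop/gnome/interface/font_name "Sans 11"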
To automatically start applications in GNOME, use one of the following methods:
To run applications for each user, put .desktop files in /usr/share/gnome/autostart.
To run applications for an individual user, put .desktop files in ~/.config/autostart.
To disable an application that starts automatically, add X-Autostart-enabled=false to the .desktop file.
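A minimal autostart .desktop file could look like the following sketch; the application name and command are placeholders, and the last line shows the key mentioned above:
[Desktop Entry]
Type=Application
Name=Example Tool
Exec=example-tool
X-Autostart-enabled=false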
GNOME Files (nautilus) monitors volume-related events and responds with a user-specified policy. You can use GNOME Files to automatically mount hotplugged drives and inserted removable media, automatically run programs, and play audio CDs or video DVDs. GNOME Files can also automatically import photos from a digital camera.
System administrators can set system-wide defaults. For more information, see Section 7.3, “Changing Preferred Applications”.
To change users' preferred applications, edit /etc/gnome_defaults.conf. Find further hints within this file.
For more information about MIME types, see http://www.freedesktop.org/Standards/shared-mime-info-spec.
To add document templates for users, fill in the Templates directory in a user's home directory. You can do this manually for each user by copying the files into ~/Templates, or system-wide by adding a Templates directory with documents to /etc/skel before the user is created.
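For example, to provide a template system-wide for all future users (the template file name here is a placeholder):
mkdir -p /etc/skel/Templates
cp invoice-template.ott /etc/skel/Templates/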
A user creates a new document from a template by right-clicking the desktop and selecting the respective menu entry.
For more information, see http://help.gnome.org/admin/.
openSUSE® Leap is available for 64-bit platforms. This does not necessarily mean that all the applications included have already been ported to 64-bit platforms. openSUSE Leap supports the use of 32-bit applications in a 64-bit system environment. This chapter offers a brief overview of how this sup…
Booting a Linux system involves different components and tasks. The hardware itself is initialized by the BIOS or the UEFI, which starts the Kernel by means of a boot loader. After this point, the boot process is completely controlled by the operating system and handled by systemd. systemd provides a set of “targets” covering boot setups for everyday usage, maintenance or emergencies.
The systemd Daemon
The program systemd is the process with process ID 1. It is responsible for initializing the system in the required way. systemd is started directly by the Kernel and resists signal 9, which normally terminates processes. All other programs are either started directly by systemd or by one of its child processes.
journalctl: Query the systemd Journal
When systemd replaced traditional init scripts in SUSE Linux Enterprise 12 (see Chapter 10, The systemd Daemon), it introduced its own logging system called journal. There is no need to run a syslog-based service anymore, as all system events are written in the journal.
This chapter describes how to configure GRUB 2, the boot loader used in openSUSE® Leap. It is the successor of the traditional GRUB boot loader—now called “GRUB 2 Legacy”. GRUB 2 has become the default boot loader in openSUSE® Leap since version 12. A YaST module is available for configuring the most important settings. The boot procedure as a whole is outlined in Chapter 9, Booting a Linux System. For details on Secure Boot support for UEFI machines see Chapter 14, UEFI (Unified Extensible Firmware Interface).
Linux offers the necessary networking tools and features for integration into all types of network structures. Network access using a network card can be configured with YaST. Manual configuration is also possible. In this chapter only the fundamental mechanisms and the relevant network configuration files are covered.
UEFI (Unified Extensible Firmware Interface) is the interface between the firmware that comes with the system hardware, all the hardware components of the system, and the operating system.
This chapter starts with information about various software packages, the virtual consoles and the keyboard layout. We talk about software components like bash, cron and logrotate, because they were changed or enhanced during the last release cycles. Even if they are small or considered of minor importance, users may want to change their default behavior, because these components are often closely coupled with the system. The chapter concludes with a section about language- and country-specific settings (I18N and L10N).
udev
The kernel can add or remove almost any device in a running system. Changes in the device state (whether a device is plugged in or removed) need to be propagated to user space. Devices need to be configured as soon as they are plugged in and recognized. Users of a certain device need to be informed …
openSUSE® Leap is available for 64-bit platforms. This does not necessarily mean that all the applications included have already been ported to 64-bit platforms. openSUSE Leap supports the use of 32-bit applications in a 64-bit system environment. This chapter offers a brief overview of how this support is implemented on 64-bit openSUSE Leap platforms. It explains how 32-bit applications are executed (runtime support) and how 32-bit applications should be compiled to enable them to run both in 32-bit and 64-bit system environments. Additionally, find information about the kernel API and an explanation of how 32-bit applications can run under a 64-bit kernel.
openSUSE Leap for the 64-bit platforms amd64 and Intel 64 is designed so that existing 32-bit applications run in the 64-bit environment “out-of-the-box.” This support means that you can continue to use your preferred 32-bit applications without waiting for a corresponding 64-bit port to become available.
If an application is available both for 32-bit and 64-bit environments, parallel installation of both versions is bound to lead to problems. In such cases, decide on one of the two versions and install and use this.
An exception to this rule is PAM (pluggable authentication modules). openSUSE Leap uses PAM in the authentication process as a layer that mediates between user and application. On a 64-bit operating system that also runs 32-bit applications it is necessary to always install both versions of a PAM module.
To be executed correctly, every application requires a range of libraries. Unfortunately, the names for the 32-bit and 64-bit versions of these libraries are identical. They must be differentiated from each other in another way.
To retain compatibility with the 32-bit version, the libraries are stored at the same place in the system as in the 32-bit environment. The 32-bit version of libc.so.6 is located under /lib/libc.so.6 in both the 32-bit and 64-bit environments.
All 64-bit libraries and object files are located in directories called lib64. The 64-bit object files that you would normally expect to find under /lib and /usr/lib are now found under /lib64 and /usr/lib64. This means that there is space for the 32-bit libraries under /lib and /usr/lib, so the file name for both versions can remain unchanged.
Subdirectories of 32-bit /lib directories which contain data content that does not depend on the word size are not moved. This scheme conforms to LSB (Linux Standards Base) and FHS (File System Hierarchy Standard).
All 64-bit architectures support the development of 64-bit objects. The level of support for 32-bit compiling depends on the architecture. These are the various implementation options for the tool chain from GCC (GNU Compiler Collection) and binutils, which include the assembler as and the linker ld:
Both 32-bit and 64-bit objects can be generated with a biarch development tool chain. The compilation of 64-bit objects is the default on almost all platforms. 32-bit objects can be generated if special flags are used. This special flag is -m32 for GCC. The flags for the binutils are architecture-dependent, but GCC transfers the correct flags to linkers and assemblers. A biarch development tool chain currently exists for amd64 (supports development for x86 and amd64 instructions), for z Systems and for ppc64. 32-bit objects are normally created on the ppc64 platform. The -m64 flag must be used to generate 64-bit objects.
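For example, on an x86_64 system a 32-bit binary could be built and verified as follows; hello.c is a placeholder source file:
gcc -m32 -o hello32 hello.c
file hello32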
openSUSE Leap does not support the direct development of 32-bit software on all platforms. To develop applications for x86 under ia64, use the corresponding 32-bit version of openSUSE Leap.
All header files must be written in an architecture-independent form. The installed 32-bit and 64-bit libraries must have an API (application programming interface) that matches the installed header files. The normal openSUSE Leap environment is designed according to this principle. In the case of manually updated libraries, resolve these issues yourself.
To develop binaries for the other architecture on a biarch architecture, the respective libraries for the second architecture must additionally be installed. These packages are called rpmname-32bit. You also need the respective headers and libraries from the rpmname-devel packages and the development libraries for the second architecture from rpmname-devel-32bit.
For example, to compile a program that uses libaio on an x86_64 system whose second architecture is 32-bit (x86), you need the following RPMs:
libaio-32bit: 32-bit runtime package
libaio-devel-32bit: headers and libraries for 32-bit development
libaio: 64-bit runtime package
libaio-devel: 64-bit development headers and libraries
Most open source programs use an autoconf-based program configuration. To use autoconf for configuring a program for the second architecture, overwrite the normal compiler and linker settings of autoconf by running the configure script with additional environment variables.
The following example refers to an x86_64 system with x86 as the second architecture.
Use the 32-bit compiler:
CC="gcc -m32"
Instruct the linker to process 32-bit objects (always use gcc as the linker front-end):
LD="gcc -m32"
Set the assembler to generate 32-bit objects:
AS="gcc -c -m32"
Specify linker flags, such as the location of 32-bit libraries, for example:
LDFLAGS="-L/usr/lib"
Specify the location for the 32-bit object code libraries:
--libdir=/usr/lib
Specify the location for the 32-bit X libraries:
--x-libraries=/usr/lib
Not all of these variables are needed for every program. Adapt them to the respective program.
An example configure call to compile a native 32-bit application on x86_64 could appear as follows:
CC="gcc -m32" LDFLAGS="-L/usr/lib;" ./configure --prefix=/usr --libdir=/usr/lib --x-libraries=/usr/lib make make install
The 64-bit kernels for x86_64 offer both a 64-bit and a 32-bit kernel ABI (application binary interface). The latter is identical with the ABI for the corresponding 32-bit kernel. This means that the 32-bit application can communicate with the 64-bit kernel in the same way as with the 32-bit kernel.
The 32-bit emulation of system calls for a 64-bit kernel does not support all the APIs used by system programs. This depends on the platform. For this reason, a few applications, like lspci, must be compiled as 64-bit programs.
A 64-bit kernel can only load 64-bit kernel modules that have been specially compiled for this kernel. It is not possible to use 32-bit kernel modules.
Some applications require separate kernel-loadable modules. If you intend to use such a 32-bit application in a 64-bit system environment, contact the provider of this application and SUSE to make sure that the 64-bit version of the kernel-loadable module and the 32-bit compiled version of the kernel API are available for this module.
Booting a Linux system involves different components and tasks. The hardware itself is initialized by the BIOS or the UEFI, which starts the Kernel by means of a boot loader. After this point, the boot process is completely controlled by the operating system and handled by systemd. systemd provides a set of “targets” covering boot setups for everyday usage, maintenance or emergencies.
The Linux boot process consists of several stages, each represented by a different component. The following list briefly summarizes the boot process and features all the major components involved:
BIOS/UEFI. After turning on the computer, the BIOS or the UEFI initializes the screen and keyboard, and tests the main memory. Up to this stage, the machine does not access any mass storage media. Subsequently, the information about the current date, time, and the most important peripherals are loaded from the CMOS values. When the first hard disk and its geometry are recognized, the system control passes from the BIOS to the boot loader. If the BIOS supports network booting, it is also possible to configure a boot server that provides the boot loader. On x86_64 systems, PXE boot is needed. Other architectures commonly use the BOOTP protocol to get the boot loader.
Boot Loader. The first physical 512-byte data sector of the first hard disk is loaded into the main memory and the boot loader that resides at the beginning of this sector takes over. The commands executed by the boot loader determine the remaining part of the boot process. Therefore, the first 512 bytes on the first hard disk are called the Master Boot Record (MBR). The boot loader then passes control to the actual operating system, in this case, the Linux Kernel. More information about GRUB 2, the Linux boot loader, can be found in Chapter 12, The Boot Loader GRUB 2. For a network boot, the BIOS acts as the boot loader. It gets the boot image from the boot server and starts the system. This is completely independent of local hard disks.
Kernel and initramfs. To pass system control, the boot loader loads both the Kernel and an initial RAM-based file system (initramfs) into memory. The contents of the initramfs can be used by the Kernel directly. initramfs contains a small executable called init that handles the mounting of the real root file system. If special hardware drivers are needed before the mass storage can be accessed, they must be in initramfs. For more information about initramfs, refer to Section 9.2, “initramfs”. If the system does not have a local hard disk, the initramfs must provide the root file system for the Kernel. This can be done with the help of a network block device like iSCSI or SAN, but it is also possible to use NFS as the root device.
init Process Naming
Two different programs are commonly named “init”:
the initramfs process mounting the root file system
the operating system process setting up the system
In this chapter we will therefore refer to them as “init on initramfs” and “systemd”, respectively.
init on initramfs. This program performs all actions needed to mount the proper root file system. It provides Kernel functionality for the needed file system and device drivers for mass storage controllers with udev. After the root file system has been found, it is checked for errors and mounted. If this is successful, the initramfs is cleaned and the systemd daemon on the root file system is executed. For more information about init on initramfs, refer to Section 9.3, “Init on initramfs”. Find more information about udev in Chapter 16, Dynamic Kernel Device Management with udev.
systemd. By starting services and mounting file systems, systemd handles the actual booting of the system. systemd is described in Chapter 10, The systemd Daemon.
initramfs #
initramfs is a small cpio archive that the Kernel can load into a RAM disk. It provides a minimal Linux environment that enables the execution of programs before the actual root file system is mounted. This minimal Linux environment is loaded into memory by BIOS or UEFI routines and does not have specific hardware requirements other than sufficient memory. The initramfs archive must always provide an executable named init that executes the systemd daemon on the root file system for the boot process to proceed.
Before the root file system can be mounted and the operating system can be started, the Kernel needs the corresponding drivers to access the device on which the root file system is located. These drivers may include special drivers for certain kinds of hard disks or even network drivers to access a network file system. The needed modules for the root file system may be loaded by init on initramfs. After the modules are loaded, udev provides the initramfs with the needed devices. Later in the boot process, after changing the root file system, it is necessary to regenerate the devices. This is done by the systemd unit udev.service with the command udevtrigger.
If you need to change hardware (for example hard disks) in an installed system and this hardware requires different drivers to be in the Kernel at boot time, you must update the initramfs file. This is done by calling dracut -f (the option -f overwrites the existing initramfs file). To add a driver for the new hardware, edit /etc/dracut.conf.d/01-dist.conf and add the following line:
force_drivers+="driver1"
Replace driver1 with the module name of the driver. If you need to add more than one driver, list them space-separated (driver1 driver2).
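For example, assuming the new storage controller needs the megaraid_sas module (the module name here is only an illustration; use the name that matches your hardware), the update could look like this:
echo 'force_drivers+="megaraid_sas"' >> /etc/dracut.conf.d/01-dist.conf
dracut -f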
Updating initramfs or init
The boot loader loads initramfs or init in the same way as the Kernel. It is not necessary to re-install GRUB 2 after updating initramfs or init, because GRUB 2 searches the directory for the right file when booting.
If you change the values of some kernel variables via the sysctl interface by editing related files (/etc/sysctl.conf or /etc/sysctl.d/*.conf), the change will be lost on the next system reboot. Even if you load the values with sysctl --system at runtime, the changes are not saved into the initramfs file. You need to update it by calling dracut -f (the option -f overwrites the existing initramfs file).
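For example, to make a changed swappiness value survive a reboot (the file name and the value are examples only), you could proceed as follows:
echo "vm.swappiness = 25" > /etc/sysctl.d/90-swappiness.conf
sysctl --system
dracut -f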
Init on initramfs #
The main purpose of init on initramfs is to prepare the mounting of and access to the real root file system. Depending on your system configuration, init on initramfs is responsible for the following tasks.
Depending on your hardware configuration, special drivers may be needed to access the hardware components of your computer (the most important component being your hard disk). To access the final root file system, the Kernel needs to load the proper file system drivers.
For each loaded module, the Kernel generates device events. udev handles these events and generates the required special block files on a RAM file system in /dev. Without those special files, the file system and other devices would not be accessible.
If you configured your system to hold the root file system under RAID or LVM, init on initramfs sets up LVM or RAID to enable access to the root file system later.
In case you want to change your /usr or swap partitions directly without the help of YaST, further actions are needed. If you forget these steps, your system will start in emergency mode. To avoid starting in emergency mode, perform the following steps:
Edit the corresponding entry in /etc/fstab and replace your previous partitions with the logical volume.
Execute the following commands:
root # mount -a
root # swapon -a
Regenerate your initial RAM disk (initramfs) with mkinitrd or dracut.
For z Systems, additionally run grub2-install.
Find more information about RAID and LVM in Chapter 5, Advanced Disk Setup.
If you configured your system to use a network-mounted root file system (mounted via NFS), init on initramfs must make sure that the proper network drivers are loaded and that they are set up to allow access to the root file system.
If the file system resides on a network block device like iSCSI or SAN, the connection to the storage server is also set up by init on initramfs.
When init on initramfs is called during the initial boot as part of the installation process, its tasks differ from those mentioned above:
When starting the installation process, your machine loads an installation Kernel and a special init containing the YaST installer. The YaST installer is running in a RAM file system and needs to have information about the location of the installation medium to access it for installing the operating system.
As mentioned in Section 9.2, “initramfs”, the boot process starts with a minimum set of drivers that can be used with most hardware configurations. init starts an initial hardware scanning process that determines the set of drivers suitable for your hardware configuration. These drivers are used to generate a custom initramfs that is needed to boot the system. If the modules are not needed for boot but for coldplug, the modules can be loaded with systemd; for more information, see Section 10.6.4, “Loading Kernel Modules”.
As soon as the hardware is properly recognized, the appropriate drivers are loaded. The udev program creates the special device files and init starts the installation system with the YaST installer.
Finally, init starts YaST, which starts package installation and system configuration.
The systemd Daemon #
The program systemd is the process with process ID 1. It is responsible for initializing the system in the required way. systemd is started directly by the Kernel and resists signal 9, which normally terminates processes. All other programs are either started directly by systemd or by one of its child processes.
Starting with openSUSE Leap, systemd is a replacement for the popular System V init daemon. systemd is fully compatible with System V init (by supporting init scripts). One of the main advantages of systemd is that it considerably speeds up boot time by aggressively parallelizing service starts. Furthermore, systemd only starts a service when it is really needed. Daemons are not started unconditionally at boot time, but rather when being required for the first time. systemd also supports Kernel Control Groups (cgroups), snapshotting and restoring the system state, and more. See http://www.freedesktop.org/wiki/Software/systemd/ for details.
This section will go into detail about the concept behind systemd.
systemd is a system and session manager for Linux, compatible with System V and LSB init scripts. The main features are:
provides aggressive parallelization capabilities
uses socket and D-Bus activation for starting services
offers on-demand starting of daemons
keeps track of processes using Linux cgroups
supports snapshotting and restoring of the system state
maintains mount and automount points
implements an elaborate transactional dependency-based service control logic
A unit configuration file encodes information about a service, a socket, a device, a mount point, an automount point, a swap file or partition, a start-up target, a watched file system path, a timer controlled and supervised by systemd, a temporary system state snapshot, a resource management slice or a group of externally created processes. “Unit file” is a generic term used by systemd for the following:
Service. Information about a process (for example running a daemon); file ends with .service
Targets. Used for grouping units and as synchronization points during start-up; file ends with .target
Sockets. Information about an IPC or network socket or a file system FIFO, for socket-based activation (like inetd); file ends with .socket
Path. Used to trigger other units (for example running a service when files change); file ends with .path
Timer. Information about a timer controlled by systemd, for timer-based activation; file ends with .timer
Mount point. Usually auto-generated by the fstab generator; file ends with .mount
Automount point. Information about a file system automount point; file ends with .automount
Swap. Information about a swap device or file for memory paging; file ends with .swap
Device. Information about a device unit as exposed in the sysfs/udev(7) device tree; file ends with .device
Scope / Slice. A concept for hierarchically managing resources of a group of processes; file ends with .scope/.slice
For more information about systemd.unit see http://www.freedesktop.org/software/systemd/man/systemd.unit.html
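As an illustration, a minimal service unit file could look like the following sketch; the daemon name and path are placeholders, not a real openSUSE service:
[Unit]
Description=Example daemon
After=network.target

[Service]
ExecStart=/usr/sbin/example-daemon

[Install]
WantedBy=multi-user.target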
The System V init system uses several different commands to handle services—the init scripts, insserv, telinit and others. systemd makes it easier to manage services, since there is only one command to memorize for the majority of service-handling tasks: systemctl. It uses the “command plus subcommand” notation like git or zypper:
systemctl [general OPTIONS] subcommand [subcommand OPTIONS]
See man 1 systemctl for a complete manual.
If the output goes to a terminal (and not to a pipe or a file, for example), systemd commands send long output to a pager by default. Use the --no-pager option to turn off paging mode.
systemd also supports bash-completion, allowing you to enter the first letters of a subcommand and then press →| to automatically complete it. This feature is only available in the bash shell and requires the installation of the package bash-completion.
Subcommands for managing services are the same as for managing a service with System V init (start, stop, ...). The general syntax for service management commands is as follows:
systemctl reload|restart|start|status|stop|... <my_service(s)>
rc<my_service(s)> reload|restart|start|status|stop|...
systemd allows you to manage several services in one go. Instead of executing init scripts one after the other as with System V init, execute a command like the following:
systemctl start <my_1st_service> <my_2nd_service>
If you want to list all services available on the system:
systemctl list-unit-files --type=service
The following table lists the most important service management commands for systemd and System V init:
Task | systemd Command | System V init Command
---|---|---
Starting. | start | start
Stopping. | stop | stop
Restarting. Shuts down services and starts them afterwards. If a service is not yet running, it will be started. | restart | restart
Restarting conditionally. Restarts services if they are currently running. Does nothing for services that are not running. | try-restart | try-restart
Reloading. Tells services to reload their configuration files without interrupting operation. Use case: tell Apache to reload a modified configuration file. | reload | reload
Reloading or restarting. Reloads services if reloading is supported, otherwise restarts them. If a service is not yet running, it will be started. | reload-or-restart | n/a
Reloading or restarting conditionally. Reloads services if reloading is supported, otherwise restarts them if currently running. Does nothing for services that are not running. | reload-or-try-restart | n/a
Getting detailed status information. Lists information about the status of services. | status | status
Getting short status information. Shows whether services are active or not. | is-active | status
The service management commands mentioned in the previous section let you manipulate services for the current session. systemd also lets you permanently enable or disable services, so they are automatically started when requested or are always unavailable. You can either do this by using YaST, or on the command line.
The following table lists enabling and disabling commands for systemd and System V init:
When enabling a service on the command line, it is not started automatically. It is scheduled to be started with the next system start-up or runlevel/target change. To immediately start a service after having enabled it, explicitly run systemctl start <my_service> or rc <my_service> start.
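For example, to enable the Apache Web server, start it immediately, and then verify its enablement state:
systemctl enable apache2.service
systemctl start apache2.service
systemctl is-enabled apache2.service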
Task | systemd Command | System V init Command
---|---|---
Enabling. | enable <my_service(s)> | insserv <my_service(s)>
Disabling. | disable <my_service(s)> | insserv -r <my_service(s)>
Checking. Shows whether a service is enabled or not. | is-enabled <my_service> | n/a
Re-enabling. Similar to restarting a service, this command first disables and then enables a service. Useful to re-enable a service with its defaults. | reenable <my_service> | n/a
Masking. After “disabling” a service, it can still be started manually. To completely disable a service, you need to mask it. Use with care. | mask <my_service> | n/a
Unmasking. A service that has been masked can only be used again after it has been unmasked. | unmask <my_service> | n/a
The entire process of starting the system and shutting it down is maintained by systemd. From this point of view, the Kernel can be considered a background process to maintain all other processes and adjust CPU time and hardware access according to requests from other programs.
With System V init the system was booted into a so-called “Runlevel”. A runlevel defines how the system is started and what services are available in the running system. Runlevels are numbered; the most commonly known ones are 0 (shutting down the system), 3 (multiuser with network) and 5 (multiuser with network and display manager).
systemd introduces a new concept by using so-called “target units”. However, it remains fully compatible with the runlevel concept. Target units are named rather than numbered and serve specific purposes. For example, the targets local-fs.target and swap.target mount local file systems and swap spaces.
The target graphical.target provides a multiuser system with network and display manager capabilities and is equivalent to runlevel 5. Complex targets, such as graphical.target, act as “meta” targets by combining a subset of other targets. Since systemd makes it easy to create custom targets by combining existing targets, it offers great flexibility.
The following list shows the most important systemd target units. For a full list refer to man 7 systemd.special.
default.target
The target that is booted by default. Not a “real” target, but rather a symbolic link to another target like graphical.target. Can be permanently changed via YaST (see Section 10.4, “Managing Services with YaST”). To change it for a session, use the Kernel command line option systemd.unit=<my_target>.target at the boot prompt.
emergency.target
Starts an emergency shell on the console. Only use it at the boot prompt as systemd.unit=emergency.target.
graphical.target
Starts a system with network, multiuser support and a display manager.
halt.target
Shuts down the system.
mail-transfer-agent.target
Starts all services necessary for sending and receiving mails.
multi-user.target
Starts a multiuser system with network.
reboot.target
Reboots the system.
rescue.target
Starts a single-user system without network.
To remain compatible with the System V init runlevel system, systemd provides special targets named runlevelX.target mapping the corresponding runlevels numbered X.
If you want to know the current target, use the command:
systemctl get-default
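For example, to switch the running system to the multiuser target for the current session, and to set the graphical target as the permanent default (the target names follow the table below):
systemctl isolate multi-user.target
systemctl set-default graphical.target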
systemd Target Units #
System V runlevel | systemd target | Purpose
---|---|---
0 | runlevel0.target, halt.target, poweroff.target | System shutdown
1, S | runlevel1.target, rescue.target | Single-user mode
2 | runlevel2.target, multi-user.target | Local multiuser without remote network
3 | runlevel3.target, multi-user.target | Full multiuser with network
4 | runlevel4.target | Unused/User-defined
5 | runlevel5.target, graphical.target | Full multiuser with network and display manager
6 | runlevel6.target, reboot.target | System reboot
/etc/inittab
The runlevels in a System V init system are configured in /etc/inittab. systemd does not use this configuration. Refer to Section 10.5.3, “Creating Custom Targets” for instructions on how to create your own bootable target.
Use the following commands to operate with target units:
Task |
systemd Command |
System V init Command |
---|---|---|
Change the current target/runlevel |
systemctl isolate <my_target>.target |
telinit X |
Change to the default target/runlevel |
systemctl default |
n/a |
Get the current target/runlevel |
systemctl list-units --type=target
With systemd there is usually more than one active target. The command lists all currently active targets. |
who -r
or
runlevel |
Persistently change the default runlevel |
Use the Services Manager or run the following command:
ln -sf /usr/lib/systemd/system/<my_target>.target /etc/systemd/system/default.target |
Use the Services Manager or change the line
id: X:initdefault:
in /etc/inittab |
Change the default runlevel for the current boot process |
Enter the following option at the boot prompt:
systemd.unit=<my_target>.target |
Enter the desired runlevel number at the boot prompt. |
Show a target's/runlevel's dependencies |
systemctl show -p "Requires" <my_target>.target
systemctl show -p "Wants" <my_target>.target
“Requires” lists the hard dependencies (the ones that must be resolved), whereas “Wants” lists the soft dependencies (the ones that get resolved if possible). |
n/a |
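To illustrate the table above, the following commands switch the running system to the multiuser target for the current session only, and then make the graphical target the persistent default. The target names are the standard ones listed earlier.
# switch the running system to multi-user.target (current session only)
systemctl isolate multi-user.target
# make graphical.target the default for subsequent boots
ln -sf /usr/lib/systemd/system/graphical.target /etc/systemd/system/default.target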
systemd offers the means to analyze the system start-up process. You can
conveniently review the list of all services and their status (rather
than having to parse /var/log/
). systemd also allows
you to scan the start-up procedure to find out how much time each
service start-up consumes.
To review the complete list of services that have been started since
booting the system, enter the command systemctl
. It
lists all active services as shown below (shortened). To get more
information on a specific service, use systemctl
status <my_service>
.
root #
systemctl
UNIT LOAD ACTIVE SUB JOB DESCRIPTION
[...]
iscsi.service loaded active exited Login and scanning of iSC+
kmod-static-nodes.service loaded active exited Create list of required s+
libvirtd.service loaded active running Virtualization daemon
nscd.service loaded active running Name Service Cache Daemon
ntpd.service loaded active running NTP Server Daemon
polkit.service loaded active running Authorization Manager
postfix.service loaded active running Postfix Mail Transport Ag+
rc-local.service loaded active exited /etc/init.d/boot.local Co+
rsyslog.service loaded active running System Logging Service
[...]
LOAD = Reflects whether the unit definition was properly loaded.
ACTIVE = The high-level unit activation state, i.e. generalization of SUB.
SUB = The low-level unit activation state, values depend on unit type.
161 loaded units listed. Pass --all to see loaded but inactive units, too.
To show all installed unit files use 'systemctl list-unit-files'.
To restrict the output to services that failed to start, use the
--failed
option:
root #
systemctl --failed
UNIT LOAD ACTIVE SUB JOB DESCRIPTION
apache2.service loaded failed failed apache
NetworkManager.service loaded failed failed Network Manager
plymouth-start.service loaded failed failed Show Plymouth Boot Screen
[...]
To debug system start-up time, systemd offers the
systemd-analyze
command. It shows the total start-up
time, a list of services ordered by start-up time and can also generate
an SVG graphic showing the time services took to start in relation to
the other services.
root #
systemd-analyze
Startup finished in 2666ms (kernel) + 21961ms (userspace) = 24628ms
root #
systemd-analyze blame
6472ms systemd-modules-load.service
5833ms remount-rootfs.service
4597ms network.service
4254ms systemd-vconsole-setup.service
4096ms postfix.service
2998ms xdm.service
2483ms localnet.service
2470ms SuSEfirewall2_init.service
2189ms avahi-daemon.service
2120ms systemd-logind.service
1210ms xinetd.service
1080ms ntp.service
[...]
75ms fbset.service
72ms purge-kernels.service
47ms dev-vda1.swap
38ms bluez-coldplug.service
35ms splash_early.service
root #
systemd-analyze plot > jupiter.example.com-startup.svg
The above-mentioned commands let you review the services that started
and the time it took to start them. If you need to know more details,
you can tell systemd
to verbosely log the complete start-up
procedure by entering the following parameters at the boot prompt:
systemd.log_level=debug systemd.log_target=kmsg
Now systemd
writes its log messages into the kernel ring buffer.
View that buffer with dmesg
:
dmesg -T | less
systemd is compatible with System V, allowing you to still use existing
System V init scripts. However, there is at least one known issue where
a System V init script does not work with systemd out of the box:
starting a service as a different user via su
or
sudo
in init scripts will result in a failure of the
script, producing an “Access denied” error.
When changing the user with su
or
sudo
, a PAM session is started. This session will be
terminated after the init script is finished. As a consequence, the
service that has been started by the init script will also be
terminated. To work around this error, proceed as follows:
Create a service file wrapper with the same name as the init script
plus the file name extension .service
:
[Unit]
Description=DESCRIPTION
After=network.target

[Service]
User=USER
Type=forking
PIDFile=PATH TO PID FILE
ExecStart=PATH TO INIT SCRIPT start
ExecStop=PATH TO INIT SCRIPT stop
ExecStopPost=/usr/bin/rm -f PATH TO PID FILE

[Install]
WantedBy=multi-user.target
Replace all values written in UPPERCASE LETTERS with appropriate values.
Start the daemon with systemctl start
APPLICATION
.
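For illustration, here is a minimal sketch of such a wrapper, assuming a hypothetical init script /etc/init.d/foo that runs as user foo and writes its PID to /var/run/foo.pid; none of these names come from a shipped service.
# /etc/systemd/system/foo.service (hypothetical example)
[Unit]
Description=Wrapper for the foo init script
After=network.target

[Service]
User=foo
Type=forking
PIDFile=/var/run/foo.pid
ExecStart=/etc/init.d/foo start
ExecStop=/etc/init.d/foo stop
ExecStopPost=/usr/bin/rm -f /var/run/foo.pid

[Install]
WantedBy=multi-user.target
Start it with systemctl start foo.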
Basic service management can also be done with the YaST Services Manager module. It supports starting, stopping, enabling and disabling services. It also lets you show a service's status and change the default target. Start the YaST module with YaST › System › Services Manager.
To change the target the system boots into, choose a target from the Default System Target
drop-down box. The most often used targets are Graphical Interface (starting a graphical login screen) and Multi-User System (starting the system in command line mode).
To start or stop a service, select it from the table. The Active
column shows whether it is currently running (Active) or not (Inactive). Toggle its status by choosing Start/Stop. Starting or stopping a service changes its status for the currently running session. To change its status throughout a reboot, you need to enable or disable it.
To enable or disable a service, select it from the table. The Enabled
column shows whether it is currently Enabled or Disabled. Toggle its status by choosing Enable/Disable. By enabling or disabling a service you configure whether it is started during booting (Enabled
) or not (Disabled). This setting will not affect the current session. To change its status in the current session, you need to start or stop it.
To view the status message of a service, select it from the list and
choose Show Details. The output is identical to the one of the command
systemctl
-l
status
<my_service>.
Faulty runlevel settings may make your system unusable. Before applying your changes, make absolutely sure that you know their consequences.
systemd
#
The following sections contain some examples for
systemd
customizations.
Always do systemd customization in /etc/systemd/
,
never in /usr/lib/systemd/
.
Otherwise your changes will be overwritten by the next update of
systemd.
The systemd service files are located in
/usr/lib/systemd/system
. If you want to customize
them, proceed as follows:
Copy the files you want to modify from
/usr/lib/systemd/system
to
/etc/systemd/system
. Keep the file names
identical to the original ones.
Modify the copies in /etc/systemd/system
according to your needs.
For an overview of your configuration changes, use the
systemd-delta
command. It can compare and identify
configuration files that override other configuration files. For
details, refer to the systemd-delta
man page.
The modified files in /etc/systemd
will take
precedence over the original files in
/usr/lib/systemd/system
, provided that their file
name is the same.
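As a quick sketch of this procedure, assuming you want to adjust the ntpd.service unit that appeared in the service listing above:
# copy the unit file, keeping the name identical
cp /usr/lib/systemd/system/ntpd.service /etc/systemd/system/ntpd.service
# edit the copy in /etc/systemd/system, then reload systemd
systemctl daemon-reload
# review which files now override which
systemd-delta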
If you only want to add a few lines to a configuration file or modify a small part of it, you can use so-called “drop-in” files. Drop-in files let you extend the configuration of unit files without having to edit or override the unit files themselves.
For example, to change one value for the foobar
service located in
/usr/lib/systemd/system/foobar.service
,
proceed as follows:
Create a directory called
/etc/systemd/system/<my_service>.service.d/
.
Note the .d
suffix. Apart from the suffix, the directory must be
named like the service that you want to patch with the drop-in file.
In that directory, create a file
whatevermodification.conf
.
Make sure it only contains the line with the value that you want to modify.
Save your changes to the file. It will be used as an extension of the original file.
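As a concrete sketch, the following drop-in changes a single value for the foobar example service; the file name restart.conf and the Restart= setting are illustrative choices, not part of any shipped unit.
# /etc/systemd/system/foobar.service.d/restart.conf (hypothetical drop-in)
[Service]
Restart=on-failure
Run systemctl daemon-reload afterward so that the drop-in takes effect.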
On System V init SUSE systems, runlevel 4 is unused to allow
administrators to create their own runlevel configuration. systemd
allows you to create any number of custom targets. It is suggested to
start by adapting an existing target such as
graphical.target
.
Copy the configuration file
/usr/lib/systemd/system/graphical.target
to
/etc/systemd/system/<my_target>.target
and adjust it according to your needs.
The configuration file copied in the previous step already covers the
required (“hard”) dependencies for the target. To also
cover the wanted (“soft”) dependencies, create a
directory
/etc/systemd/system/<my_target>.target.wants
.
For each wanted service, create a symbolic link from
/usr/lib/systemd/system
into
/etc/systemd/system/<my_target>.target.wants
.
Once you have finished setting up the target, reload the systemd configuration to make the new target available:
systemctl daemon-reload
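Put together, the steps above might look like the following sketch; the target name my.target and the wanted service sshd.service are illustrative choices.
cp /usr/lib/systemd/system/graphical.target /etc/systemd/system/my.target
mkdir /etc/systemd/system/my.target.wants
ln -s /usr/lib/systemd/system/sshd.service /etc/systemd/system/my.target.wants/
systemctl daemon-reload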
The following sections cover advanced topics for system administrators. For even more advanced systemd documentation, refer to Lennart Pöttering's series about systemd for administrators at http://0pointer.de/blog/projects.
systemd
supports cleaning temporary directories regularly. The
configuration from the previous system version is automatically migrated
and active. tmpfiles.d
—which is responsible
for managing temporary files—reads its configuration from
/etc/tmpfiles.d/*.conf
,
/run/tmpfiles.d/*.conf
, and
/usr/lib/tmpfiles.d/*.conf
files. Configuration
placed in /etc/tmpfiles.d/*.conf
overrides related
configurations from the other two directories
(/usr/lib/tmpfiles.d/*.conf
is where packages store
their configuration files).
The configuration format is one line per path containing action and path, and optionally mode, ownership, age and argument fields, depending on the action. The following example unlinks the X11 lock files:
Type Path               Mode UID GID Age Argument
r    /tmp/.X[0-9]*-lock
To get the status of the tmpfiles timer:
systemctl status systemd-tmpfiles-clean.timer
systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories
   Loaded: loaded (/usr/lib/systemd/system/systemd-tmpfiles-clean.timer; static)
   Active: active (waiting) since Tue 2014-09-09 15:30:36 CEST; 1 weeks 6 days ago
     Docs: man:tmpfiles.d(5)
           man:systemd-tmpfiles(8)

Sep 09 15:30:36 jupiter systemd[1]: Starting Daily Cleanup of Temporary Directories.
Sep 09 15:30:36 jupiter systemd[1]: Started Daily Cleanup of Temporary Directories.
For more information on temporary files handling, see man 5
tmpfiles.d
.
Section 10.6.8, “Debugging Services” explains
how to view log messages for a given service. However, displaying log
messages is not restricted to service logs. You can also access and
query the complete log messages written by systemd
—the
so-called “Journal”. Use the command
journalctl
to display the complete log
messages starting with the oldest entries. Refer to man 1
journalctl
for options such as applying filters or
changing the output format.
You can save the current state of systemd
to a named snapshot and
later revert to it with the isolate
subcommand. This
is useful when testing services or custom targets, because it allows you
to return to a defined state at any time. A snapshot is only available
in the current session and will automatically be deleted on reboot. A
snapshot name must end in .snapshot
.
systemctl snapshot <my_snapshot>.snapshot
systemctl delete <my_snapshot>.snapshot
systemctl show <my_snapshot>.snapshot
systemctl isolate <my_snapshot>.snapshot
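A short usage sketch, assuming a hypothetical snapshot name test.snapshot:
# save the current state
systemctl snapshot test.snapshot
# change the system state, for example for testing
systemctl isolate rescue.target
# return to the saved state
systemctl isolate test.snapshot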
With systemd
, kernel modules can automatically be loaded at boot
time via a configuration file in
/etc/modules-load.d
. The file should be named
MODULE.conf and have the following content:
# load module MODULE at boot time
MODULE
In case a package installs a configuration file for loading a Kernel
module, the file gets installed to
/usr/lib/modules-load.d
. If two configuration files
with the same name exist, the one in
/etc/modules-load.d
takes precedence.
For more information, see the modules-load.d(5)
man page.
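For example, a minimal sketch that loads the loop module at boot time; the module name is only an illustration.
# /etc/modules-load.d/loop.conf (hypothetical file)
# load module loop at boot time
loop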
With System V init, actions that needed to be performed before loading a
service had to be specified in /etc/init.d/before.local
. This procedure is no longer supported with systemd. If you
need to perform actions before starting services, do the following:
Create a drop-in file in /etc/modules-load.d
directory (see man modules-load.d
for the syntax)
Create a drop-in file in /etc/tmpfiles.d
(see
man tmpfiles.d
for the syntax)
Create a system service file, for example
/etc/systemd/system/before.service
, from the
following template:
[Unit]
Before=NAME OF THE SERVICE YOU WANT THIS SERVICE TO BE STARTED BEFORE

[Service]
Type=oneshot
RemainAfterExit=true
ExecStart=YOUR_COMMAND
# beware, the executable is run directly, not through a shell; check the man pages
# systemd.service and systemd.unit for the full syntax

[Install]
# target in which to start the service
WantedBy=multi-user.target
#WantedBy=graphical.target
When the service file is created, you should run the following
commands (as root
):
systemctl daemon-reload
systemctl enable before
Every time you modify the service file, you need to run:
systemctl daemon-reload
On a traditional System V init system it is not always possible to clearly assign a process to the service that spawned it. Some services, such as Apache, spawn a lot of third-party processes (for example CGI or Java processes), which themselves spawn more processes. This makes a clear assignment difficult or even impossible. Additionally, a service may not terminate correctly, leaving some children alive.
systemd solves this problem by placing each service into its own cgroup. cgroups are a Kernel feature that allows aggregating processes and all their children into hierarchically organized groups. systemd names each cgroup after its service. Since a non-privileged process is not allowed to “leave” its cgroup, this provides an effective way to label all processes spawned by a service with the name of the service.
To list all processes belonging to a service, use the command
systemd-cgls
. The result will look like the following
(shortened) example:
root #
systemd-cgls --no-pager
├─1 /usr/lib/systemd/systemd --switched-root --system --deserialize 20
├─user.slice
│ └─user-1000.slice
│ ├─session-102.scope
│ │ ├─12426 gdm-session-worker [pam/gdm-password]
│ │ ├─15831 gdm-session-worker [pam/gdm-password]
│ │ ├─15839 gdm-session-worker [pam/gdm-password]
│ │ ├─15858 /usr/lib/gnome-terminal-server
[...]
└─system.slice
├─systemd-hostnamed.service
│ └─17616 /usr/lib/systemd/systemd-hostnamed
├─cron.service
│ └─1689 /usr/sbin/cron -n
├─ntpd.service
│ └─1328 /usr/sbin/ntpd -p /var/run/ntp/ntpd.pid -g -u ntp:ntp -c /etc/ntp.conf
├─postfix.service
│ ├─ 1676 /usr/lib/postfix/master -w
│ ├─ 1679 qmgr -l -t fifo -u
│ └─15590 pickup -l -t fifo -u
├─sshd.service
│ └─1436 /usr/sbin/sshd -D
[...]
See Book “System Analysis and Tuning Guide”, Chapter 9 “Kernel Control Groups” for more information about cgroups.
As explained in Section 10.6.6, “Kernel Control Groups (cgroups)”, it is not always possible to assign a process to its parent service process in a System V init system. This makes it difficult to terminate a service and all of its children. Child processes that have not been terminated will remain as zombie processes.
systemd's concept of confining each service into a cgroup makes it
possible to clearly identify all child processes of a service and
therefore allows you to send a signal to each of these processes. Use
systemctl kill
to send signals to services. For a
list of available signals refer to man 7 signals
.
Sending SIGTERM to a Service
SIGTERM is the default signal that is sent.
systemctl kill <my_service>
Use the -s
option to specify the signal that should
be sent.
systemctl kill -s SIGNAL <my_service>
By default the kill
command sends the signal to
all
processes of the specified cgroup. You can
restrict it to the control
or the
main
process. The latter is for example useful to
force a service to reload its configuration by sending
SIGHUP
:
systemctl kill -s SIGHUP --kill-who=main <my_service>
By default, systemd is not overly verbose. If a service was started
successfully, no output will be produced. In case of a failure, a short
error message will be displayed. However, systemctl
status
provides means to debug start-up and operation of a
service.
systemd comes with its own logging mechanism (“The
Journal”) that logs system messages. This allows you to display
the service messages together with status messages. The
status
command works similar to
tail
and can also display the log messages in
different formats, making it a powerful debugging tool.
Whenever a service fails to start, use systemctl status
<my_service>
to get a
detailed error message:
root # systemctl start apache2
Job failed. See system journal and 'systemctl status' for details.
root # systemctl status apache2
   Loaded: loaded (/usr/lib/systemd/system/apache2.service; disabled)
   Active: failed (Result: exit-code) since Mon, 04 Jun 2012 16:52:26 +0200; 29s ago
  Process: 3088 ExecStart=/usr/sbin/start_apache2 -D SYSTEMD -k start (code=exited, status=1/FAILURE)
   CGroup: name=systemd:/system/apache2.service

Jun 04 16:52:26 g144 start_apache2[3088]: httpd2-prefork: Syntax error on line 205 of /etc/apache2/httpd.conf: Syntax error on li...alHost>
The default behavior of the status
subcommand is
to display the last ten messages a service issued. To change the
number of messages to show, use the
--lines=n
parameter:
systemctl status ntp
systemctl --lines=20 status ntp
To display a “live stream” of service messages, use the
--follow
option, which works like
tail
-f
:
systemctl --follow status ntp
The --output=mode
parameter allows you to change the output format of service messages.
The most important modes available are:
short
The default format. Shows the log messages with a human readable time stamp.
verbose
Full output with all fields.
cat
Terse output without time stamps.
For more information on systemd refer to the following online resources:
Lennart Pöttering, one of the systemd authors, has written a series of blog entries (13 at the time of writing this chapter). Find them at http://0pointer.de/blog/projects.
journalctl
: Query the systemd
Journal #
When systemd
replaced traditional init scripts in SUSE Linux Enterprise 12 (see
Chapter 10, The systemd
Daemon), it introduced its own logging system
called journal. There is no need to run a
syslog
based service anymore, as all system
events are written in the journal.
The journal itself is a system service managed by systemd
. Its full
name is systemd-journald.service
. It collects and
stores logging data by maintaining structured indexed journals based on
logging information received from the kernel, from user processes, from
standard input and error of system services. The
systemd-journald
service is on by default:
# systemctl status systemd-journald
systemd-journald.service - Journal Service
   Loaded: loaded (/usr/lib/systemd/system/systemd-journald.service; static)
   Active: active (running) since Mon 2014-05-26 08:36:59 EDT; 3 days ago
     Docs: man:systemd-journald.service(8)
           man:journald.conf(5)
 Main PID: 413 (systemd-journal)
   Status: "Processing requests..."
   CGroup: /system.slice/systemd-journald.service
           └─413 /usr/lib/systemd/systemd-journald
[...]
The journal stores log data in /run/log/journal/
by
default. Because the /run/
directory is volatile by
nature, log data is lost at reboot. To make the log data persistent, the
directory /var/log/journal/
with correct ownership
and permissions must exist, where the systemd-journald service can store
its data. systemd
will create the directory for you—and
switch to persistent logging—if you do the following:
As root
, open /etc/systemd/journald.conf
for editing.
# vi /etc/systemd/journald.conf
Uncomment the line containing Storage=
and change it
to
[...]
[Journal]
Storage=persistent
#Compress=yes
[...]
Save the file and restart systemd-journald:
systemctl restart systemd-journald
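To verify the switch, check that the persistent directory is now being populated (a quick sketch; the machine ID subdirectory name differs on every system):
ls /var/log/journal
# a subdirectory named after the machine ID should now exist and receive log data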
journalctl
Useful Switches #
This section introduces several common useful options to enhance the
default journalctl
behavior. All switches are
described in the journalctl
manual page, man
1 journalctl
.
To show all journal messages related to a specific executable, specify the full path to the executable:
journalctl /usr/lib/systemd/systemd
-f
Shows only the most recent journal messages, and prints new log entries as they are added to the journal.
-e
Prints the messages and jumps to the end of the journal, so that the latest entries are visible within the pager.
-r
Prints the messages of the journal in reverse order, so that the latest entries are listed first.
-k
Shows only kernel messages. This is equivalent to the field match
_TRANSPORT=kernel
(see
Section 11.3.3, “Filtering Based on Fields”).
-u UNIT
Shows only messages for the specified systemd
unit. This is
equivalent to the field match
_SYSTEMD_UNIT=UNIT
(see
Section 11.3.3, “Filtering Based on Fields”).
# journalctl -u apache2 [...] Jun 03 10:07:11 pinkiepie systemd[1]: Starting The Apache Webserver... Jun 03 10:07:12 pinkiepie systemd[1]: Started The Apache Webserver.
When called without switches, journalctl
shows the
full content of the journal, the oldest entries listed first. The output
can be filtered by specific switches and fields.
journalctl
can filter messages based on a specific
system boot. To list all available boots, run
# journalctl --list-boots
-1 097ed2cd99124a2391d2cffab1b566f0 Mon 2014-05-26 08:36:56 EDT—Fri 2014-05-30 05:33:44 EDT
 0 156019a44a774a0bb0148a92df4af81b Fri 2014-05-30 05:34:09 EDT—Fri 2014-05-30 06:15:01 EDT
The first column lists the boot offset: 0
for the
current boot, -1
for the previous,
-2
for the second previous, etc. The second column
contains the boot ID, and then the limiting time stamps of the specific
boot follow.
Show all messages from the current boot:
# journalctl -b
If you need to see journal messages from the previous boot, add an offset parameter. The following example outputs the previous boot messages:
# journalctl -b -1
Another way is to list boot messages based on the boot ID. For this purpose, use the _BOOT_ID field:
# journalctl _BOOT_ID=156019a44a774a0bb0148a92df4af81b
You can filter the output of journalctl
by specifying
the starting and/or ending date. The date specification should be of the
format "2014-06-30 9:17:16". If the time part is omitted, the midnight
is assumed. If seconds are omitted, ":00" is assumed. If the date part
is omitted, the current day is assumed. Instead of numeric expression,
you can specify the keywords "yesterday", "today", or "tomorrow", which
refer to midnight of the day before the current day, of the current day,
or of the day after the current day. If you specify "now", it refers to
the current time. You can also specify relative times prefixed with
-
or +
, referring to times before
or after the current time.
Show only new messages since now, and update the output continuously:
# journalctl --since "now" -f
Show all messages since last midnight till 3:20am:
# journalctl --since "today" --until "3:20"
You can filter the output of the journal by specific fields. The syntax
of a field to be matched is FIELD_NAME=MATCHED_VALUE
,
such as _SYSTEMD_UNIT=httpd.service
. You can specify
multiple matches in a single query to filter the output messages even
more. See man 7 systemd.journal-fields
for a list of
default fields.
Show messages produced by a specific process ID:
# journalctl _PID=1039
Show messages belonging to a specific user ID:
# journalctl _UID=1000
Show messages from the kernel ring buffer (the same as
dmesg
produces):
# journalctl _TRANSPORT=kernel
Show messages from the service's standard or error output:
# journalctl _TRANSPORT=stdout
Show messages produced by a specified service only:
# journalctl _SYSTEMD_UNIT=avahi-daemon.service
If two different fields are specified, only entries that match both expressions at the same time are shown:
# journalctl _SYSTEMD_UNIT=avahi-daemon.service _PID=1488
If two matches refer to the same field, all entries matching either expression are shown:
# journalctl _SYSTEMD_UNIT=avahi-daemon.service _SYSTEMD_UNIT=dbus.service
You can use the '+' separator to combine two expressions in a logical 'OR'. The following example shows all messages from the Avahi service process with the process ID 1480 together with all messages from the D-Bus service:
# journalctl _SYSTEMD_UNIT=avahi-daemon.service _PID=1480 + _SYSTEMD_UNIT=dbus.service
systemd
Errors #
This section introduces a simple example to illustrate how to find and
fix the error reported by systemd
during apache2
start-up.
Try to start the apache2 service:
# systemctl start apache2
Job for apache2.service failed. See 'systemctl status apache2' and 'journalctl -xn' for details.
Let us see what the service's status says:
# systemctl status apache2
apache2.service - The Apache Webserver
   Loaded: loaded (/usr/lib/systemd/system/apache2.service; disabled)
   Active: failed (Result: exit-code) since Tue 2014-06-03 11:08:13 CEST; 7min ago
  Process: 11026 ExecStop=/usr/sbin/start_apache2 -D SYSTEMD -DFOREGROUND \
           -k graceful-stop (code=exited, status=1/FAILURE)
The ID of the process causing the failure is 11026.
Show the verbose version of messages related to process ID 11026:
# journalctl -o verbose _PID=11026
[...]
MESSAGE=AH00526: Syntax error on line 6 of /etc/apache2/default-server.conf:
[...]
MESSAGE=Invalid command 'DocumenttRoot', perhaps misspelled or defined by a module
[...]
Fix the typo inside
/etc/apache2/default-server.conf
, start the
apache2 service, and print its status:
# systemctl start apache2 && systemctl status apache2 apache2.service - The Apache Webserver Loaded: loaded (/usr/lib/systemd/system/apache2.service; disabled) Active: active (running) since Tue 2014-06-03 11:26:24 CEST; 4ms ago Process: 11026 ExecStop=/usr/sbin/start_apache2 -D SYSTEMD -DFOREGROUND -k graceful-stop (code=exited, status=1/FAILURE) Main PID: 11263 (httpd2-prefork) Status: "Processing requests..." CGroup: /system.slice/apache2.service ├─11263 /usr/sbin/httpd2-prefork -f /etc/apache2/httpd.conf -D [...] ├─11280 /usr/sbin/httpd2-prefork -f /etc/apache2/httpd.conf -D [...] ├─11281 /usr/sbin/httpd2-prefork -f /etc/apache2/httpd.conf -D [...] ├─11282 /usr/sbin/httpd2-prefork -f /etc/apache2/httpd.conf -D [...] ├─11283 /usr/sbin/httpd2-prefork -f /etc/apache2/httpd.conf -D [...] └─11285 /usr/sbin/httpd2-prefork -f /etc/apache2/httpd.conf -D [...]
The behavior of the systemd-journald service can be adjusted by modifying
/etc/systemd/journald.conf
. This section introduces
only basic option settings. For a complete file description, see
man 5 journald.conf
. Note that you need to restart the
journal for the changes to take effect with
# systemctl restart systemd-journald
If the journal log data is saved to a persistent location (see
Section 11.1, “Making the Journal Persistent”), it uses up to 10% of the file
system the /var/log/journal
resides on. For
example, if /var/log/journal
is located on a 30 GB
/var
partition, the journal may use up to 3 GB of
the disk space. To change this limit, change (and uncomment) the
SystemMaxUse
option:
SystemMaxUse=50M
/dev/ttyX
#
You can forward the journal to a terminal device to inform you about
system messages on a preferred terminal screen, for example
/dev/tty12
. Change the following journald options to
ForwardToConsole=yes
TTYPath=/dev/tty12
Journald is backward compatible with traditional syslog implementations
such as rsyslog
. Make sure the following is
valid:
rsyslog is installed.
# rpm -q rsyslog
rsyslog-7.4.8-2.16.x86_64
rsyslog service is enabled.
# systemctl is-enabled rsyslog
enabled
Forwarding to syslog is enabled in
/etc/systemd/journald.conf
.
ForwardToSyslog=yes
systemd
Journal #
For an easy way of filtering the systemd journal (without having to deal with the journalctl
syntax), you can use the YaST journal module. After installing it with sudo zypper in
yast2-journal
, start it from YaST by selecting System › Systemd Journal. Alternatively, start it from the command line by entering sudo yast2 journal
.
The module displays the log entries in a table. The search box on top allows you to search
for entries that contain certain characters, similar to using grep
. To filter the
entries by date and time, unit, file, or priority, click Change filters and set the
respective options.
This chapter describes how to configure GRUB 2, the boot loader used in openSUSE® Leap. It is the successor of the traditional GRUB boot loader—now called “GRUB Legacy”. GRUB 2 has been the default boot loader in openSUSE® Leap since version 12. A YaST module is available for configuring the most important settings. The boot procedure as a whole is outlined in Chapter 9, Booting a Linux System. For details on Secure Boot support for UEFI machines see Chapter 14, UEFI (Unified Extensible Firmware Interface).
The configuration is stored in different files.
More file systems are supported (for example, Btrfs).
Can directly read files stored on LVM or RAID devices.
The user interface can be translated and altered with themes.
Includes a mechanism for loading modules to support additional features, such as file systems, etc.
Automatically searches for and generates boot entries for other kernels and operating systems, such as Windows.
Includes a minimal Bash-like console.
The configuration of GRUB 2 is based on the following files:
/boot/grub2/grub.cfg
This file contains the configuration of the GRUB 2 menu items. It
replaces menu.lst
used in GRUB Legacy.
grub.cfg
is automatically generated by the
grub2-mkconfig
command, and should not be
edited.
/boot/grub2/custom.cfg
This optional file is directly sourced by
grub.cfg
at boot time and can be used to add
custom items to the boot menu.
/etc/default/grub
This file controls the user settings of GRUB 2 and usually includes additional environmental settings such as backgrounds and themes.
/etc/grub.d/
The scripts in this directory are read during execution of the
grub2-mkconfig
command. Their instructions are
integrated into the main configuration file
/boot/grub/grub.cfg
.
/etc/sysconfig/bootloader
This configuration file is used when configuring the boot loader with
YaST and every time a new kernel is installed. It is evaluated by
the perl-bootloader which modifies the boot loader configuration file
(for example /boot/grub2/grub.cfg
for GRUB 2)
accordingly. /etc/sysconfig/bootloader
is not a
GRUB 2-specific configuration file—the values are applied
to any boot loader installed on openSUSE Leap.
/boot/grub2/x86_64-efi and other architecture-specific files
These configuration files contain architecture-specific options.
GRUB 2 can be controlled in various ways. Boot entries from an
existing configuration can be selected from the graphical menu (splash
screen). The configuration is loaded from the file
/boot/grub2/grub.cfg
which is compiled from other
configuration files (see below). All GRUB 2 configuration files are
considered system files, and you need root
privileges to edit
them.
After having manually edited GRUB 2 configuration files, you need to
run grub2-mkconfig
to activate the changes.
However, this is not necessary when changing the configuration with
YaST, since it will automatically run
grub2-mkconfig
.
/boot/grub2/grub.cfg
#
The graphical splash screen with the boot menu is based on the GRUB 2
configuration file /boot/grub2/grub.cfg
, which
contains information about all partitions or operating systems that can
be booted by the menu.
Every time the system is booted, GRUB 2 loads the menu file directly
from the file system. For this reason, GRUB 2 does not need to be
re-installed after changes to the configuration file.
grub.cfg
is automatically rebuilt with kernel
installations or removals.
grub.cfg
is compiled by the
grub2-mkconfig
command from the file
/etc/default/grub
and scripts found in the
/etc/grub.d/
directory. Therefore you should never
edit the file manually. Instead, edit the related source files or use
the YaST module to modify the
configuration as described in Section 12.3, “Configuring the Boot Loader with YaST”.
/etc/default/grub
#More general options of GRUB 2 belong here, such as the time the menu is displayed, or the default OS to boot. To list all available options, see the output of the following command:
grep "export GRUB_DEFAULT" -A50 /usr/sbin/grub2-mkconfig | grep GRUB_
In addition to already defined variables, the user may introduce their
own variables, and use them later in the scripts found in the
/etc/grub.d
directory.
After having edited /etc/default/grub
, run
grub2-mkconfig
to update the main configuration
file.
All options set in this file are general options that affect all boot entries. Specific options for Xen Kernels or the Xen hypervisor can be set via the GRUB_*_XEN_* configuration options. See below for details.
GRUB_DEFAULT
Sets the boot menu entry that is booted by default. Its value can be a numeric value, the complete name of a menu entry, or “saved”.
GRUB_DEFAULT=2
boots the third (counted from zero)
boot menu entry.
GRUB_DEFAULT="2>0"
boots the first submenu
entry of the third top-level menu entry.
GRUB_DEFAULT="Example boot menu entry"
boots the
menu entry with the title “Example boot menu entry”.
GRUB_DEFAULT=saved
boots the entry specified by
the grub2-reboot
or grub2-set-default
commands. While grub2-reboot
sets the
default boot entry for the next reboot only,
grub2-set-default
sets the default boot entry
until changed.
GRUB_HIDDEN_TIMEOUT
Waits the specified number of seconds for the user to press a key.
During the period no menu is shown unless the user presses a key. If
no key is pressed during the time specified, the control is passed to
GRUB_TIMEOUT
.
GRUB_HIDDEN_TIMEOUT=0
first checks whether
Shift is pressed and shows the boot menu if yes,
otherwise immediately boots the default menu entry. This is the
default when only one bootable OS is identified by GRUB 2.
GRUB_HIDDEN_TIMEOUT_QUIET
If false
is specified, a countdown timer is
displayed on a blank screen when the
GRUB_HIDDEN_TIMEOUT
feature is active.
GRUB_TIMEOUT
Time period in seconds the boot menu is displayed before
automatically booting the default boot entry. If you press a key, the
timeout is cancelled and GRUB 2 waits for you to make the
selection manually. GRUB_TIMEOUT=-1
will cause the
menu to be displayed until you select the boot entry manually.
GRUB_CMDLINE_LINUX
Entries on this line are added at the end of the boot entries for normal and recovery mode. Use it to add kernel parameters to the boot entry.
GRUB_CMDLINE_LINUX_DEFAULT
Same as GRUB_CMDLINE_LINUX
but the entries are
appended in the normal mode only.
GRUB_CMDLINE_LINUX_RECOVERY
Same as GRUB_CMDLINE_LINUX
but the entries are
appended in the recovery mode only.
GRUB_CMDLINE_LINUX_XEN_REPLACE
This entry will completely replace the
GRUB_CMDLINE_LINUX
parameters for all Xen
boot entries.
GRUB_CMDLINE_LINUX_XEN_REPLACE_DEFAULT
Same as GRUB_CMDLINE_LINUX_XEN_REPLACE
but it will
only replace parameters
of GRUB_CMDLINE_LINUX_DEFAULT
.
GRUB_CMDLINE_XEN
This entry specifies the kernel parameters for the Xen guest
kernel only—the operation principle is the same as for
GRUB_CMDLINE_LINUX
.
GRUB_CMDLINE_XEN_DEFAULT
Same as GRUB_CMDLINE_XEN
—the operation
principle is the same as for
GRUB_CMDLINE_LINUX_DEFAULT
.
GRUB_TERMINAL
Enables and specifies an input/output terminal device. Can be
console
(PC BIOS and EFI consoles),
serial
(serial terminal),
ofconsole
(Open Firmware console), or the default
gfxterm
(graphics-mode output). It is also
possible to enable more than one device by quoting the required
options, for example GRUB_TERMINAL="console
serial"
.
GRUB_GFXMODE
The resolution used for the gfxterm
graphical
terminal. Note that you can only use modes supported by your graphics
card (VBE). The default is ‘auto’, which tries to select a
preferred resolution. You can display the screen resolutions
available to GRUB 2 by typing vbeinfo
in the
GRUB 2 command line. The command line is accessed by typing
C when the GRUB 2 boot menu screen is displayed.
You can also specify a color depth by appending it to the resolution
setting, for example GRUB_GFXMODE=1280x1024x24
.
GRUB_BACKGROUND
Set a background image for the gfxterm
graphical
terminal. The image must be a file readable by GRUB 2 at boot
time, and it must end with the .png
,
.tga
, .jpg
, or
.jpeg
suffix. If necessary, the image will be
scaled to fit the screen.
GRUB_DISABLE_OS_PROBER
If this option is set to true
, automatic searching
for other operating systems is disabled. Only the kernel images in
/boot/
and the options from your own scripts in
/etc/grub.d/
are detected.
SUSE_BTRFS_SNAPSHOT_BOOTING
If this option is set to true
, GRUB 2 can boot
directly into Snapper snapshots. For more information read
Section 3.3, “System Rollback by Booting from Snapshots”.
All *_DEFAULT
parameters can be handled manually or
by YaST.
For a complete list of options, see the GNU GRUB manual. For a complete list of possible parameters, see http://en.opensuse.org/Linuxrc.
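A minimal sketch of /etc/default/grub tying several of these options together; the values are illustrative, not defaults:
# /etc/default/grub (excerpt with illustrative values)
GRUB_DEFAULT=saved
GRUB_TIMEOUT=8
GRUB_CMDLINE_LINUX_DEFAULT="quiet splash=silent"
GRUB_GFXMODE=1280x1024x24
Afterward, run grub2-mkconfig -o /boot/grub2/grub.cfg to activate the changes.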
/etc/grub.d
#
The scripts in this directory are read during execution of the
grub2-mkconfig
command, and their instructions are
incorporated into /boot/grub2/grub.cfg
. The order
of menu items in grub.cfg
is determined by the
order in which the files in this directory are run. Files with a leading
numeral are executed first, beginning with the lowest number.
00_header
is run before
10_linux
, which would run before
40_custom
. If files with alphabetic names are
present, they are executed after the numerically-named files. Only
executable files generate output to grub.cfg
during
execution of grub2-mkconfig
. By default all files in
the /etc/grub.d
directory are executable. The most
important scripts are:
00_header
Sets environmental variables such as system file locations, display
settings, themes, and previously saved entries. It also imports
preferences stored in the /etc/default/grub
.
Normally you do not need to make changes to this file.
10_linux
Identifies Linux kernels on the root device and creates relevant menu entries. This includes the associated recovery mode option if enabled. Only the latest kernel is displayed on the main menu page, with additional kernels included in a submenu.
30_os-prober
This script uses OS-prober
to search for Linux and
other operating systems and places the results in the GRUB 2 menu.
There are sections to identify specific other operating systems, such
as Windows or OS X.
40_custom
This file provides a simple way to include custom boot entries into
grub.cfg
. Make sure that you do not change the
exec tail -n +3 $0
part at the beginning.
90_persistent
This is a special script that copies a corresponding part of the
grub.cfg
file and outputs it back unchanged.
This way you can modify that part of grub.cfg
directly and the change survives the execution of
grub2-mkconfig
.
The processing sequence is set by the preceding numbers, with the lowest number being executed first. If scripts are preceded by the same number, the alphabetical order of the complete name decides the order.
In GRUB Legacy, the device.map
configuration file
was used to derive Linux device names from BIOS drive numbers. The
mapping between BIOS drives and Linux devices cannot always be guessed
correctly. For example, GRUB Legacy would get a wrong order if the boot
sequence of IDE and SCSI drives is exchanged in the BIOS configuration.
GRUB 2 avoids this problem by using device ID strings (UUIDs) or file
system labels when generating grub.cfg
. GRUB 2
utilities create a temporary device map on the fly, which is usually
sufficient, particularly in the case of single-disk systems.
However, if you need to override the GRUB 2's automatic device
mapping mechanism, create your custom mapping file
/boot/grub2/device.map
. The following example
changes the mapping to make DISK 3
the boot disk.
Note that GRUB 2 partition numbers start with 1
and not with 0
as in GRUB Legacy.
(hd1) /dev/disk/by-id/DISK3_ID
(hd2) /dev/disk/by-id/DISK1_ID
(hd3) /dev/disk/by-id/DISK2_ID
Even before the operating system is booted, GRUB 2 enables access to file systems. Users without root permissions can access files in your Linux system to which they have no access after the system is booted. To block this kind of access or to prevent users from booting certain menu entries, set a boot password.
If set, the boot password is required on every boot, which means the system does not boot automatically.
Proceed as follows to set a boot password. Alternatively use YaST ().
Encrypt the password using grub2-mkpasswd-pbkdf2:
tux >
sudo grub2-mkpasswd-pbkdf2
Password: ****
Reenter password: ****
PBKDF2 hash of your password is grub.pbkdf2.sha512.10000.9CA4611006FE96BC77A...
Paste the resulting string into the file
/etc/grub.d/40_custom
together with the
set superusers
command.
set superusers="root" password_pbkdf2 root grub.pbkdf2.sha512.10000.9CA4611006FE96BC77A...
Run grub2-mkconfig
to import the changes into
the main configuration file.
After you reboot, you will be prompted for a user name and a password
when trying to boot a menu entry. Enter root
and
the password you typed during the
grub2-mkpasswd-pbkdf2
command. If the credentials
are correct, the system will boot the selected boot entry.
The easiest way to configure general options of the boot loader in your openSUSE Leap system is to use the YaST module. In the YaST Control Center, select System › Boot Loader. The module shows the current boot loader configuration of your system and allows you to make changes.
Use the
tab to view and change settings related to type, location and advanced loader settings. You can choose whether to use GRUB 2 in standard or EFI mode. If you have an EFI system, you can only install GRUB2-EFI; otherwise your system is no longer bootable.
To reinstall the boot loader, make sure to change a setting in YaST and then change it back. For example, to reinstall GRUB2-EFI, select
first and then immediately switch back. Otherwise, the boot loader may only be partially reinstalled.
To use a boot loader other than the ones listed, select
. Read the documentation of your boot loader carefully before choosing this option. To modify the location of the boot loader, follow these steps:
Select the
tab and then choose one of the following options for the location:
This installs the boot loader in the MBR of the first disk (according to the boot sequence preset in the BIOS).
This installs the boot loader in the boot sector of the
/
partition (this is the default).
Use this option to specify the location of the boot loader manually.
Click
to apply your changes. If your computer has more than one hard disk, you can specify the boot sequence of the disks. For more information look at Section 12.2.4, “Mapping between BIOS Drives and Linux Devices”.
Open the
tab. Click
. If more than one disk is listed, select a disk and click
or to reorder the displayed disks. Click
two times to save the changes.
Advanced boot options can be configured via the
tab. Change the value of
by typing in a new value and clicking the appropriate arrow key with your mouse.
When selected, the boot loader searches for other systems like Windows or other Linux installations.
Hides the boot menu and boots the default entry.
Select the desired entry from the “Default Boot Section” list. Note that the “>” sign in the boot entry name delimits the boot section and its subsection.
Protects the boot loader and the system with an additional password. For more details see Section 12.2.6, “Setting a Boot Password”.
The VGA Mode option specifies the default screen resolution during the boot process.
The optional kernel parameters are added at the end of the default parameters. For a list of all possible parameters, see http://en.opensuse.org/Linuxrc.
When checked, the boot menu appears on a graphical splash screen rather than in text mode. The resolution of the boot screen can then be set from the
list, and a graphical theme definition file can be specified with the file-chooser.
If your machine is controlled via a serial console, activate this
option and specify which COM port to use at which speed. See
info grub
or
http://www.gnu.org/software/grub/manual/grub.html#Serial-terminal
On 3215 and 3270 terminals there are some differences and limitations on how to move the cursor and how to issue editing commands within GRUB 2.
Interactivity is strongly limited. Typing often does not result in visual feedback. To see where the cursor is, type an underscore (_).
The 3270 terminal is much better at displaying and refreshing screens than the 3215 terminal.
“Traditional” cursor movement is not possible. Alt, Meta, Ctrl and the cursor keys do not work. To move the cursor, use the key combinations listed in Section 12.4.2, “Key Combinations”.
The caret (^) is used as a control character. To type a literal ^ followed by a letter, type ^, ^, LETTER.
The Enter key does not work, use ^–J instead.
Common Substitutes:
^–J | engage (“Enter”) |
^–L | abort, return to previous “state” |
^–I | tab completion (in edit and shell mode) |
Keys Available in Menu Mode:
^–A | first entry |
^–E | last entry |
^–P | previous entry |
^–N | next entry |
^–G | previous page |
^–C | next page |
^–F | boot selected entry or enter submenu (same as ^–J) |
E | edit selected entry |
C | enter GRUB-Shell |
Keys Available in Edit Mode:
^–P | previous line |
^–N | next line |
^–B | backward char |
^–F | forward char |
^–A | beginning of line |
^–E | end of line |
^–H | backspace |
^–D | delete |
^–K | kill line |
^–Y | yank |
^–O | open line |
^–L | refresh screen |
^–X | boot entry |
^–C | enter GRUB-Shell |
Keys Available in Command Line Mode:
^–P | previous command |
^–N | next command from history |
^–A | beginning of line |
^–E | end of line |
^–B | backward char |
^–F | forward char |
^–H | backspace |
^–D | delete |
^–K | kill line |
^–U | discard line |
^–Y | yank |
grub2-mkconfig
Generates a new /boot/grub2/grub.cfg
based on
/etc/default/grub
and the scripts from
/etc/grub.d/
.
grub2-mkconfig -o /boot/grub2/grub.cfg
Running grub2-mkconfig
without any parameters
prints the configuration to STDOUT where it can be reviewed. Use
grub2-script-check
after
/boot/grub2/grub.cfg
has been written to check
its syntax.
grub2-mkconfig
Cannot Repair UEFI Secure Boot Tables
If you are using UEFI Secure Boot and your system is not reaching GRUB 2 correctly anymore, you may need to additionally reinstall Shim and regenerate the UEFI boot table. To do so, use:
root #
shim-install --config-file=/boot/grub2/grub.cfg
grub2-mkrescue
Creates a bootable rescue image of your installed GRUB 2 configuration.
grub2-mkrescue -o save_path/name.iso iso
grub2-script-check
Checks the given file for syntax errors.
grub2-script-check /boot/grub2/grub.cfg
grub2-once
Set the default boot entry for the next boot only. To get the list of
available boot entries use the --list
option.
grub2-once number_of_the_boot_entry
grub2-once
Help
Call the program without any option to get a full list of all possible options.
Extensive information about GRUB 2 is available at
http://www.gnu.org/software/grub/. Also refer to the
grub
info page. You can also search for the keyword
“GRUB 2” in the Technical Information Search at
http://www.suse.com/support to get information about
special issues.
Linux offers the necessary networking tools and features for integration into all types of network structures. Network access using a network card can be configured with YaST. Manual configuration is also possible. In this chapter only the fundamental mechanisms and the relevant network configuration files are covered.
Linux and other Unix operating systems use the TCP/IP protocol. It is not a single network protocol, but a family of network protocols that offer various services. The protocols listed in Several Protocols in the TCP/IP Protocol Family, are provided for exchanging data between two machines via TCP/IP. Networks combined by TCP/IP, comprising a worldwide network, are also called “the Internet.”
RFC stands for Request for Comments. RFCs are documents that describe various Internet protocols and implementation procedures for the operating system and its applications. The RFC documents describe the setup of Internet protocols. For more information about RFCs, see http://www.ietf.org/rfc.html.
Transmission Control Protocol: a connection-oriented secure protocol. The data to transmit is first sent by the application as a stream of data and converted into the appropriate format by the operating system. The data arrives at the respective application on the destination host in the original data stream format it was initially sent. TCP determines whether any data has been lost or jumbled during the transmission. TCP is implemented wherever the data sequence matters.
User Datagram Protocol: a connectionless, insecure protocol. The data to transmit is sent in the form of packets generated by the application. The order in which the data arrives at the recipient is not guaranteed and data loss is possible. UDP is suitable for record-oriented applications. It features a smaller latency period than TCP.
Internet Control Message Protocol: Essentially, this is not a protocol for the end user, but a special control protocol that issues error reports and can control the behavior of machines participating in TCP/IP data transfer. In addition, it provides a special echo mode that can be viewed using the program ping.
Internet Group Management Protocol: This protocol controls machine behavior when implementing IP multicast.
As shown in Figure 13.1, “Simplified Layer Model for TCP/IP”, data exchange takes place in different layers. The actual network layer is the insecure data transfer via IP (Internet protocol). On top of IP, TCP (transmission control protocol) guarantees, to a certain extent, security of the data transfer. The IP layer is supported by the underlying hardware-dependent protocol, such as Ethernet.
The diagram provides one or two examples for each layer. The layers are ordered according to abstraction levels. The lowest layer is very close to the hardware. The uppermost layer, however, is almost a complete abstraction from the hardware. Every layer has its own special function. The special functions of each layer are mostly implicit in their description. The data link and physical layers represent the physical network used, such as Ethernet.
Almost all hardware protocols work on a packet-oriented basis. The data to transmit is collected into packets (it cannot be sent all at once). The maximum size of a TCP/IP packet is approximately 64 KB. Packets are normally somewhat smaller, as the network hardware can be a limiting factor. The maximum size of a data packet on an Ethernet is about fifteen hundred bytes. The size of a TCP/IP packet is limited to this amount when the data is sent over an Ethernet. If more data is transferred, more data packets need to be sent by the operating system.
For the layers to serve their designated functions, additional information regarding each layer must be saved in the data packet. This takes place in the header of the packet. Every layer attaches a small block of data, called the protocol header, to the front of each emerging packet. A sample TCP/IP data packet traveling over an Ethernet cable is illustrated in Figure 13.2, “TCP/IP Ethernet Packet”. The checksum is located at the end of the packet, not at the beginning. This simplifies things for the network hardware.
When an application sends data over the network, the data passes through each layer, all implemented in the Linux Kernel except the physical layer. Each layer is responsible for preparing the data so it can be passed to the next layer. The lowest layer is ultimately responsible for sending the data. The entire procedure is reversed when data is received. Like the layers of an onion, in each layer the protocol headers are removed from the transported data. Finally, the transport layer is responsible for making the data available for use by the applications at the destination. In this manner, one layer only communicates with the layer directly above or below it. For applications, it is irrelevant whether data is transmitted via a 100 Mbit/s FDDI network or via a 56-Kbit/s modem line. Likewise, it is irrelevant for the data line which kind of data is transmitted, as long as packets are in the correct format.
The discussion in this section is limited to IPv4 networks. For information about IPv6 protocol, the successor to IPv4, refer to Section 13.2, “IPv6—The Next Generation Internet”.
Every computer on the Internet has a unique 32-bit address. These 32 bits (or 4 bytes) are normally written as illustrated in the second row in Example 13.1, “Writing IP Addresses”.
IP Address (binary):  11000000 10101000 00000000 00010100
IP Address (decimal):      192.     168.       0.      20
In decimal form, the four bytes are written in the decimal number system, separated by periods. The IP address is assigned to a host or a network interface. It can be used only once throughout the world. There are exceptions to this rule, but these are not relevant to the following passages.
The points in IP addresses indicate the hierarchical system. Until the 1990s, IP addresses were strictly categorized in classes. However, this system proved too inflexible and was discontinued. Now, classless routing (CIDR, classless interdomain routing) is used.
Netmasks are used to define the address range of a subnet. If two hosts are in the same subnet, they can reach each other directly. If they are not in the same subnet, they need the address of a gateway that handles all the traffic for the subnet. To check if two IP addresses are in the same subnet, simply “AND” both addresses with the netmask. If the result is identical, both IP addresses are in the same local network. If there are differences, the remote IP address, and thus the remote interface, can only be reached over a gateway.
To understand how the netmask works, look at
Example 13.2, “Linking IP Addresses to the Netmask”. The netmask consists of
32 bits that identify how much of an IP address belongs to the network.
All those bits that are 1
mark the corresponding bit
in the IP address as belonging to the network. All bits that are
0
mark bits inside the subnet. This means that the
more bits are 1
, the smaller the subnet is. Because
the netmask always consists of several successive 1
bits, it is also possible to count the number of bits in the netmask. In
Example 13.2, “Linking IP Addresses to the Netmask” the first net with 24 bits
could also be written as 192.168.0.0/24
.
IP address (192.168.0.20):  11000000 10101000 00000000 00010100
Netmask (255.255.255.0):    11111111 11111111 11111111 00000000
---------------------------------------------------------------
Result of the link:         11000000 10101000 00000000 00000000
In the decimal system:           192.     168.       0.       0

IP address (213.95.15.200): 11010101 10111111 00001111 11001000
Netmask (255.255.255.0):    11111111 11111111 11111111 00000000
---------------------------------------------------------------
Result of the link:         11010101 10111111 00001111 00000000
In the decimal system:           213.      95.      15.       0
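The same AND operation can be reproduced on the shell. The following Bash sketch is illustrative only and assumes dotted-quad input:
ip=192.168.0.20 mask=255.255.255.0
IFS=. read -r i1 i2 i3 i4 <<< "$ip"
IFS=. read -r m1 m2 m3 m4 <<< "$mask"
# AND each byte of the address with the corresponding netmask byte
echo "$(( i1 & m1 )).$(( i2 & m2 )).$(( i3 & m3 )).$(( i4 & m4 ))"
# prints: 192.168.0.0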
To give another example: all machines connected with the same Ethernet cable are usually located in the same subnet and are directly accessible. Even when the subnet is physically divided by switches or bridges, these hosts can still be reached directly.
IP addresses outside the local subnet can only be reached if a gateway is configured for the target network. In the most common case, there is only one gateway that handles all traffic that is external. However, it is also possible to configure several gateways for different subnets.
If a gateway has been configured, all external IP packets are sent to the appropriate gateway. This gateway then attempts to forward the packets in the same manner—from host to host—until it reaches the destination host or the packet's TTL (time to live) expires.
Base Network Address
This is the netmask AND any address in the network, as shown in
Example 13.2, “Linking IP Addresses to the Netmask” under
Result
. This address cannot be assigned to any
hosts.
Broadcast Address
This could be paraphrased as: “Access all hosts in this subnet.” To generate this, the netmask is inverted in binary form and linked to the base network address with a logical OR. The above example therefore results in 192.168.0.255. This address cannot be assigned to any hosts.
Local Host
The address 127.0.0.1
is
assigned to the “loopback device” on each host. A
connection can be set up to your own machine with this address and
with all addresses from the complete
127.0.0.0/8
loopback
network as defined with IPv4. With IPv6 there is only one loopback
address (::1
).
Because IP addresses must be unique all over the world, you cannot select random addresses. There are three address domains to use if you want to set up a private IP-based network. These cannot get any connection from the rest of the Internet, because they cannot be transmitted over the Internet. These address domains are specified in RFC 1597 and listed in Table 13.1, “Private IP Address Domains”.
Network/Netmask | Domain
---|---
10.0.0.0/255.0.0.0 | 10.x.x.x
172.16.0.0/255.240.0.0 | 172.16.x.x to 172.31.x.x
192.168.0.0/255.255.0.0 | 192.168.x.x
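As a quick check, Python's standard ipaddress module knows these reserved ranges (a minimal sketch, not an openSUSE tool):

import ipaddress

for addr in ("10.1.2.3", "172.16.0.1", "192.168.0.20", "8.8.8.8"):
    print(addr, ipaddress.ip_address(addr).is_private)
# The first three fall into the private domains listed above and print
# True; the last one is a public address and prints False.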
Because of the emergence of the WWW (World Wide Web), the Internet has experienced explosive growth, with an increasing number of computers communicating via TCP/IP in the past fifteen years. Since Tim Berners-Lee at CERN (http://public.web.cern.ch) invented the WWW in 1990, the number of Internet hosts has grown from a few thousand to about a hundred million.
As mentioned, an IPv4 address consists of only 32 bits. Also, quite a few IP addresses are lost—they cannot be used because of the way networks are organized. The number of addresses available in a subnet is two to the power of the number of host bits, minus two. A subnet has, for example, 2, 6, or 14 addresses available. To connect 128 hosts to the Internet, for example, you need a subnet with 256 IP addresses, of which only 254 are usable, because two IP addresses are needed for the structure of the subnet itself: the broadcast and the base network address.
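The arithmetic can be checked with a short Python sketch (using the standard ipaddress module):

import ipaddress

net = ipaddress.ip_network("192.168.0.0/24")   # 8 host bits
print(net.num_addresses)            # 256 = 2^8
print(net.num_addresses - 2)        # 254 usable host addresses
print(sum(1 for _ in net.hosts()))  # 254; hosts() skips the base network
                                    # address and the broadcast address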
Under the current IPv4 protocol, DHCP or NAT (network address translation) are the typical mechanisms used to circumvent the potential address shortage. Combined with the convention to keep private and public address spaces separate, these methods can certainly mitigate the shortage. The problem with them lies in their configuration, which is a chore to set up and a burden to maintain. To set up a host in an IPv4 network, you need several address items, such as the host's own IP address, the subnetmask, the gateway address and maybe a name server address. All these items need to be known and cannot be derived from somewhere else.
With IPv6, both the address shortage and the complicated configuration should be a thing of the past. The following sections tell more about the improvements and benefits brought by IPv6 and about the transition from the old protocol to the new one.
The most important and most visible improvement brought by the new protocol is the enormous expansion of the available address space. An IPv6 address is made up of 128 bit values instead of the traditional 32 bits. This provides for as many as several quadrillion IP addresses.
However, IPv6 addresses are not only different from their predecessors with regard to their length. They also have a different internal structure that may contain more specific information about the systems and the networks to which they belong. More details about this are found in Section 13.2.2, “Address Types and Structure”.
The following is a list of other advantages of the new protocol:
IPv6 makes the network “plug and play” capable, which means that a newly set up system integrates into the (local) network without any manual configuration. The new host uses its automatic configuration mechanism to derive its own address from the information made available by the neighboring routers, relying on a protocol called the neighbor discovery (ND) protocol. This method does not require any intervention on the administrator's part and there is no need to maintain a central server for address allocation—an additional advantage over IPv4, where automatic address allocation requires a DHCP server.
Nevertheless, if a router is connected to a switch, the router should send periodic advertisements with flags telling the hosts of a network how they should interact with each other. For more information, see RFC 2462, the radvd.conf(5) man page, and RFC 3315.
IPv6 makes it possible to assign several addresses to one network interface at the same time. This allows users to access several networks easily, something that could be compared with the international roaming services offered by mobile phone companies: when you take your mobile phone abroad, the phone automatically logs in to a foreign service as soon as it enters the corresponding area, so you can be reached under the same number everywhere and are able to place an outgoing call, as you would in your home area.
With IPv4, network security is an add-on function. IPv6 includes IPsec as one of its core features, allowing systems to communicate over a secure tunnel to avoid eavesdropping by outsiders on the Internet.
Realistically, it would be impossible to switch the entire Internet from IPv4 to IPv6 at one time. Therefore, it is crucial that both protocols can coexist not only on the Internet, but also on one system. This is ensured by compatible addresses (IPv4 addresses can easily be translated into IPv6 addresses) and by using several tunnels. See Section 13.2.3, “Coexistence of IPv4 and IPv6”. Also, systems can rely on a dual stack IP technique to support both protocols at the same time, meaning that they have two network stacks that are completely separate, such that there is no interference between the two protocol versions.
With IPv4, some services, such as SMB, need to broadcast their packets to all hosts in the local network. IPv6 allows a much more fine-grained approach by enabling servers to address hosts through multicasting—by addressing several hosts as parts of a group (which is different from addressing all hosts through broadcasting or each host individually through unicasting). Which hosts are addressed as a group may depend on the concrete application. There are some predefined groups to address all name servers (the all name servers multicast group), for example, or all routers (the all routers multicast group).
As mentioned, the current IP protocol is lacking in two important aspects: there is an increasing shortage of IP addresses and configuring the network and maintaining the routing tables is becoming a more complex and burdensome task. IPv6 solves the first problem by expanding the address space to 128 bits. The second one is countered by introducing a hierarchical address structure, combined with sophisticated techniques to allocate network addresses, and multihoming (the ability to assign several addresses to one device, giving access to several networks).
When dealing with IPv6, it is useful to know about three different types of addresses:
Unicast
Addresses of this type are associated with exactly one network interface. Packets with such an address are delivered to only one destination. Accordingly, unicast addresses are used to transfer packets to individual hosts on the local network or the Internet.
Multicast
Addresses of this type relate to a group of network interfaces. Packets with such an address are delivered to all destinations that belong to the group. Multicast addresses are mainly used by certain network services to communicate with certain groups of hosts in a well-directed manner.
Anycast
Addresses of this type are related to a group of interfaces. Packets with such an address are delivered to the member of the group that is closest to the sender, according to the principles of the underlying routing protocol. Anycast addresses are used to make it easier for hosts to find out about servers offering certain services in the given network area. All servers of the same type have the same anycast address. Whenever a host requests a service, it receives a reply from the server with the closest location, as determined by the routing protocol. If this server should fail for some reason, the protocol automatically selects the second closest server, then the third one, and so forth.
An IPv6 address is made up of eight four-digit fields, each representing 16 bits, written in hexadecimal notation and separated by colons (:). Leading zeros within a given field may be dropped, but zeros within the field or at its end may not. Another convention is that one or more consecutive all-zero fields may be collapsed into a double colon. However, only one such :: is allowed per address. This kind of shorthand notation is shown in Example 13.3, “Sample IPv6 Address”, where all three lines represent the same address.
Example 13.3: Sample IPv6 Address

fe80 : 0000 : 0000 : 0000 : 0000 : 10 : 1000 : 1a4
fe80 : 0 : 0 : 0 : 0 : 10 : 1000 : 1a4
fe80 : : 10 : 1000 : 1a4
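The equivalence of the three notations can be verified with Python's ipaddress module (a minimal sketch):

import ipaddress

a = ipaddress.IPv6Address("fe80:0000:0000:0000:0000:0010:1000:01a4")
b = ipaddress.IPv6Address("fe80:0:0:0:0:10:1000:1a4")
c = ipaddress.IPv6Address("fe80::10:1000:1a4")
print(a == b == c)   # True: all three spellings denote the same address
print(a.compressed)  # fe80::10:1000:1a4
print(a.exploded)    # fe80:0000:0000:0000:0000:0010:1000:01a4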
Each part of an IPv6 address has a defined function. The first bytes form the prefix and specify the type of address. The center part is the network portion of the address, but it may be unused. The end of the address forms the host part. With IPv6, the netmask is defined by indicating the length of the prefix after a slash at the end of the address. An address, as shown in Example 13.4, “IPv6 Address Specifying the Prefix Length”, contains the information that the first 64 bits form the network part of the address and the last 64 form its host part. In other words, the 64 means that the netmask is filled with 64 1-bit values from the left. As with IPv4, the IP address is ANDed with the values from the netmask to determine whether the host is located in the same subnet or in another one.
Example 13.4: IPv6 Address Specifying the Prefix Length

fe80::10:1000:1a4/64
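Splitting such an address into its network and host parts can also be illustrated with the ipaddress module (a sketch):

import ipaddress

iface = ipaddress.ip_interface("fe80::10:1000:1a4/64")
print(iface.network)            # fe80::/64 (the first 64 bits, the network part)
print(iface.ip)                 # fe80::10:1000:1a4
print(iface.network.prefixlen)  # 64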
IPv6 knows about several predefined types of prefixes. Some are shown in the following list.
00
IPv4 addresses and IPv4 over IPv6 compatibility addresses. These are used to maintain compatibility with IPv4. Their use still requires a router able to translate IPv6 packets into IPv4 packets. Several special addresses, such as the one for the loopback device, have this prefix as well.
2 or 3 as the first digit
Aggregatable global unicast addresses. As is the case with IPv4, an interface can be assigned to form part of a certain subnet. Currently, there are the following address spaces: 2001::/16 (production quality address space) and 2002::/16 (6to4 address space).
fe80::/10
Link-local addresses. Addresses with this prefix should not be routed and should therefore only be reachable from within the same subnet.
fec0::/10
Site-local addresses. These may be routed, but only within the network of the organization to which they belong. In effect, they are the IPv6 equivalent of the current private network address space, such as 10.x.x.x.
ff
These are multicast addresses.
A unicast address consists of three basic components:
The first part (which also contains one of the prefixes mentioned above) is used to route packets through the public Internet. It includes information about the company or institution that provides the Internet access.
The second part contains routing information about the subnet to which to deliver the packet.
The third part identifies the interface to which to deliver the packet. This also allows for the MAC to form part of the address. Given that the MAC is a globally unique, fixed identifier coded into the device by the hardware maker, the configuration procedure is substantially simplified. In fact, the last 64 address bits are consolidated to form the EUI-64 token, with 48 bits taken from the MAC and the remaining 16 bits containing special information about the token type. This also makes it possible to assign an EUI-64 token to interfaces that do not have a MAC, such as those based on PPP.
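A minimal Python sketch of this modified EUI-64 construction (the MAC address used here is a made-up example; the universal/local bit of the first octet is inverted and the two bytes ff:fe are inserted in the middle):

def eui64_interface_id(mac):
    # Split the 48-bit MAC into bytes, flip the universal/local bit
    # of the first octet, and insert ff:fe between the OUI and NIC parts.
    octets = [int(part, 16) for part in mac.split(":")]
    octets[0] ^= 0x02
    eui64 = octets[:3] + [0xFF, 0xFE] + octets[3:]
    # Format as four 16-bit hexadecimal groups, as in an IPv6 address.
    return ":".join(f"{eui64[i] << 8 | eui64[i + 1]:x}" for i in range(0, 8, 2))

print(eui64_interface_id("00:16:3e:6d:c0:42"))  # 216:3eff:fe6d:c042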
On top of this basic structure, IPv6 distinguishes between five different types of unicast addresses:
:: (unspecified)
This address is used by the host as its source address when the interface is initialized for the first time—when the address cannot yet be determined by other means.

::1 (loopback)
The address of the loopback device.
IPv4 compatible addresses
The IPv6 address is formed by the IPv4 address and a prefix consisting of 96 zero bits. This type of compatibility address is used for tunneling (see Section 13.2.3, “Coexistence of IPv4 and IPv6”) to allow IPv4 and IPv6 hosts to communicate with others operating in a pure IPv4 environment.
IPv4 addresses mapped to IPv6
This type of address specifies a pure IPv4 address in IPv6 notation.
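Both compatibility forms can be written with the ipaddress module (a sketch; the addresses are arbitrary examples):

import ipaddress

# IPv4 compatible: 96 zero bits followed by the 32-bit IPv4 address.
compat = ipaddress.IPv6Address("::192.168.0.20")
print(compat.compressed)   # ::c0a8:14

# IPv4 mapped: a pure IPv4 address in IPv6 notation (::ffff:a.b.c.d).
mapped = ipaddress.IPv6Address("::ffff:192.168.0.20")
print(mapped.ipv4_mapped)  # 192.168.0.20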
There are two address types for local use:
Link-local
This type of address can only be used in the local subnet. Packets with a source or target address of this type should not be routed to the Internet or other subnets. These addresses contain a special prefix (fe80::/10) and the interface ID of the network card, with the middle part consisting of zero bytes. Addresses of this type are used during automatic configuration to communicate with other hosts belonging to the same subnet.
Site-local
Packets with this type of address may be routed to other subnets, but not to the wider Internet—they must remain inside the organization's own network. Such addresses are used for intranets and are an equivalent of the private address space defined by IPv4. They contain a special prefix (fec0::/10), the interface ID, and a 16-bit field specifying the subnet ID. Again, the rest is filled with zero bytes.
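The scope of a given address can be queried with the ipaddress module as well (a sketch; fec0::1 is an arbitrary site-local example):

import ipaddress

print(ipaddress.IPv6Address("fe80::10:1000:1a4").is_link_local)  # True
print(ipaddress.IPv6Address("fec0::1").is_site_local)            # True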
As a completely new feature introduced with IPv6, each network interface normally gets several IP addresses, with the advantage that several networks can be accessed through the same interface. One of these networks can be configured completely automatically using the MAC and a known prefix with the result that all hosts on the local network can be reached as soon as IPv6 is enabled (using the link-local address). With the MAC forming part of it, any IP address used in the world is unique. The only variable parts of the address are those specifying the site topology and the public topology, depending on the actual network in which the host is currently operating.
For a host to go back and forth between different networks, it needs at least two addresses. One of them, the home address, not only contains the interface ID but also an identifier of the home network to which it normally belongs (and the corresponding prefix). The home address is a static address and, as such, it does not normally change. Still, all packets destined to the mobile host can be delivered to it, regardless of whether it operates in the home network or somewhere outside. This is made possible by the completely new features introduced with IPv6, such as stateless autoconfiguration and neighbor discovery. In addition to its home address, a mobile host gets one or more additional addresses that belong to the foreign networks where it is roaming. These are called care-of addresses. The home network has a facility that forwards any packets destined to the host when it is roaming outside. In an IPv6 environment, this task is performed by the home agent, which takes all packets destined to the home address and relays them through a tunnel. On the other hand, those packets destined to the care-of address are directly transferred to the mobile host without any special detours.
The migration of all hosts connected to the Internet from IPv4 to IPv6 is a gradual process. Both protocols will coexist for some time to come. The coexistence on one system is guaranteed where there is a dual stack implementation of both protocols. That still leaves the question of how an IPv6 enabled host should communicate with an IPv4 host and how IPv6 packets should be transported by the current networks, which are predominantly IPv4 based. The best solutions offer tunneling and compatibility addresses (see Section 13.2.2, “Address Types and Structure”).
IPv6 hosts that are more or less isolated in the (worldwide) IPv4 network can communicate through tunnels: IPv6 packets are encapsulated as IPv4 packets to move them across an IPv4 network. Such a connection between two IPv4 hosts is called a tunnel. To achieve this, packets must include the IPv6 destination address (or the corresponding prefix) and the IPv4 address of the remote host at the receiving end of the tunnel. A basic tunnel can be configured manually according to an agreement between the hosts' administrators. This is also called static tunneling.
However, the configuration and maintenance of static tunnels is often too labor-intensive to use them for daily communication needs. Therefore, IPv6 provides for three different methods of dynamic tunneling:
6over4
IPv6 packets are automatically encapsulated as IPv4 packets and sent over an IPv4 network capable of multicasting. IPv6 is tricked into seeing the whole network (Internet) as a huge local area network (LAN). This makes it possible to determine the receiving end of the IPv4 tunnel automatically. However, this method does not scale very well and is also hampered by the fact that IP multicasting is far from widespread on the Internet. Therefore, it only provides a solution for smaller corporate or institutional networks where multicasting can be enabled. The specifications for this method are laid down in RFC 2529.
6to4
With this method, IPv6 addresses are automatically generated from IPv4 addresses, enabling isolated IPv6 hosts to communicate over an IPv4 network. However, several problems have been reported regarding the communication between those isolated IPv6 hosts and the Internet. The method is described in RFC 3056.
IPv6 tunnel broker
This method relies on special servers that provide dedicated tunnels for IPv6 hosts. It is described in RFC 3053.
To configure IPv6, you normally do not need to make any changes on the individual workstations. IPv6 is enabled by default. To disable or enable IPv6 on an installed system, use the YaST Network Settings module. Alternatively, the IPv6 module can be loaded manually with modprobe -i ipv6 as root. It is impossible to unload the IPv6 module after it has been loaded.
Because of the autoconfiguration concept of IPv6, the network card is assigned an address in the link-local network. Normally, no routing table management takes place on a workstation. The network routers can be queried by the workstation, using the router advertisement protocol, for what prefix and gateways should be implemented. The radvd program can be used to set up an IPv6 router. This program informs the workstations which prefix to use for the IPv6 addresses and which routers to use. Alternatively, use zebra/quagga for automatic configuration of both addresses and routing.
For information about how to set up various types of tunnels using the /etc/sysconfig/network files, see the man page of ifcfg-tunnel (man ifcfg-tunnel).
The above overview does not cover the topic of IPv6 comprehensively. For a more in-depth look at the new protocol, refer to the following online documentation and books:
The starting point for everything about IPv6.
All information needed to start your own IPv6 network.
The list of IPv6-enabled products.
Here, find the Linux IPv6-HOWTO and many links related to the topic.
The fundamental RFC about IPv6.
A book describing all the important aspects of the topic is IPv6 Essentials by Silvia Hagen (ISBN 0-596-00125-8).
DNS assists in assigning an IP address to one or more names and assigning a name to an IP address. In Linux, this conversion is usually carried out by a special type of software known as bind. The machine that takes care of this conversion is called a name server. The names make up a hierarchical system in which each name component is separated by a period. The name hierarchy is, however, independent of the IP address hierarchy described above.
Consider a complete name, such as jupiter.example.com, written in the format hostname.domain. A full name, called a fully qualified domain name (FQDN), consists of a host name and a domain name (example.com). The latter also includes the top level domain or TLD (com).
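Seen as a string, a FQDN simply splits at the first period (a small sketch in Python):

fqdn = "jupiter.example.com"
hostname, _, domain = fqdn.partition(".")   # split off the host name
tld = domain.rsplit(".", 1)[-1]             # last component is the TLD
print(hostname, domain, tld)                # jupiter example.com com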
TLD assignment has become quite confusing for historical reasons. Traditionally, three-letter domain names are used in the USA. In the rest of the world, the two-letter ISO national codes are the standard. In addition to that, longer TLDs were introduced in 2000 that represent certain spheres of activity (for example, .info, .name, .museum).
In the early days of the Internet (before 1990), the file /etc/hosts was used to store the names of all the machines represented over the Internet. This quickly proved to be impractical in the face of the rapidly growing number of computers connected to the Internet. For this reason, a decentralized database was developed to store the host names in a widely distributed manner. This database, similar to the name server, does not have the data pertaining to all hosts in the Internet readily available, but can dispatch requests to other name servers.
The top of the hierarchy is occupied by root name servers. These root name servers manage the top level domains and are run by the Network Information Center (NIC). Each root name server knows about the name servers responsible for a given top level domain. Information about top level domain NICs is available at http://www.internic.net.
DNS can do more than resolve host names. The name server also knows which host is receiving e-mails for an entire domain—the mail exchanger (MX).
For your machine to resolve an IP address, it must know about at least one name server and its IP address. Easily specify such a name server with the help of YaST. The configuration of name server access with openSUSE® Leap is described in Section 13.4.1.4, “Configuring Host Name and DNS”. Setting up your own name server is described in Chapter 19, The Domain Name System.
The whois protocol is closely related to DNS. With this program, you can quickly find out who is responsible for a given domain.
The .local top level domain is treated as a link-local domain by the resolver. DNS requests are sent as multicast DNS requests instead of normal DNS requests. If you already use the .local domain in your name server configuration, you must switch this option off in /etc/host.conf. For more information, see the host.conf manual page.
If you want to switch off MDNS during installation, use nomdns=1 as a boot parameter.
For more information on multicast DNS, see http://www.multicastdns.org.
There are many supported networking types on Linux. Most of them use different device names and the configuration files are spread over several locations in the file system. For a detailed overview of the aspects of manual network configuration, see Section 13.6, “Configuring a Network Connection Manually”.
All network interfaces with link up (with a network cable connected) are automatically configured. Additional hardware can be configured any time on the installed system. The following sections describe the network configuration for all types of network connections supported by openSUSE Leap.
To configure your Ethernet or Wi-Fi/Bluetooth card in YaST, select System › Network Settings. After starting the module, YaST displays the Network Settings dialog with four tabs: Global Options, Overview, Hostname/DNS, and Routing.

The Global Options tab allows you to set general networking options such as the network setup method, IPv6, and general DHCP options. For more information, see Section 13.4.1.1, “Configuring Global Networking Options”.

The Overview tab contains information about installed network interfaces and configurations. Any properly detected network card is listed with its name. You can manually configure new cards, and remove or change their configuration in this dialog. If you want to manually configure a card that was not automatically detected, see Section 13.4.1.3, “Configuring an Undetected Network Card”. If you want to change the configuration of an already configured card, see Section 13.4.1.2, “Changing the Configuration of a Network Card”.

The Hostname/DNS tab allows you to set the host name of the machine and name the servers to be used. For more information, see Section 13.4.1.4, “Configuring Host Name and DNS”.

The Routing tab is used for the configuration of routing. See Section 13.4.1.5, “Configuring Routing” for more information.
The Global Options tab of the YaST Network Settings module allows you to set important global networking options, such as the use of NetworkManager, IPv6 and DHCP client options. These settings are applicable for all network interfaces.

In the Network Setup Method, choose the way network connections are managed. If the NetworkManager method is selected, the nm-applet should be used to configure network options, and the Overview, Hostname/DNS and Routing tabs of the Network Settings module are disabled. For more information on NetworkManager, see Chapter 28, Using NetworkManager.
In the IPv6 Protocol Settings, choose whether to use the IPv6 protocol. It is possible to use IPv6 together with IPv4. By default, IPv6 is enabled. However, in networks not using the IPv6 protocol, response times can be faster with the IPv6 protocol disabled. To disable IPv6, deactivate Enable IPv6. If IPv6 is disabled, the Kernel no longer loads the IPv6 module automatically. This setting will be applied after reboot.

In the DHCP Client Options, configure options for the DHCP client. The DHCP Client Identifier must be different for each DHCP client on a single network. If left empty, it defaults to the hardware address of the network interface. However, if you are running several virtual machines using the same network interface and, therefore, the same hardware address, specify a unique free-form identifier here.
The Hostname to Send option specifies a string used for the host name option field when the DHCP client sends messages to the DHCP server. Leave it at the default AUTO to send the current host name (that is, the one defined in /etc/HOSTNAME). Make the option field empty to not send any host name.
If you do not want to change the default route according to the information from DHCP, deactivate Change Default Route via DHCP.

To change the configuration of a network card, select a card from the list of detected cards in System › Network Settings in YaST and click Edit. The Network Card Setup dialog appears, in which to adjust the card configuration using the General, Address and Hardware tabs.

You can set the IP address of the network card or the way its IP address is determined in the Address tab of the dialog. Both IPv4 and IPv6 addresses are supported. The network card can have No IP Address (which is useful for bonding devices), a Statically Assigned IP Address (IPv4 or IPv6), or a Dynamic Address assigned via DHCP or Zeroconf or both.

If using Dynamic Address, select whether to use DHCP Version 4 Only (for IPv4), DHCP Version 6 Only (for IPv6), or DHCP Both Version 4 and 6. If possible, the first network card with link that is available during the installation is automatically configured to use automatic address setup via DHCP.

DHCP should also be used if you are using a DSL line with no static IP assigned by the ISP (Internet Service Provider). If you decide to use DHCP, configure the details in DHCP Client Options in the Global Options tab of the Network Settings dialog of the YaST network card configuration module. If you have a virtual host setup where different hosts communicate through the same interface, a DHCP Client Identifier is necessary to distinguish them.

DHCP is a good choice for client configuration but it is not ideal for server configuration. To set a static IP address, proceed as follows:
Select a card from the list of detected cards in the Overview tab of the YaST network card configuration module and click Edit.

In the Address tab, choose Statically Assigned IP Address.

Enter the IP Address. Both IPv4 and IPv6 addresses can be used. Enter the network mask in Subnet Mask. If the IPv6 address is used, use Subnet Mask for the prefix length in the format /64.

Optionally, you can enter a fully qualified Hostname for this address, which will be written to the /etc/hosts configuration file.

Click Next.

To activate the configuration, click OK.

If you use the static address, the name servers and default gateway are not configured automatically. To configure name servers, proceed as described in Section 13.4.1.4, “Configuring Host Name and DNS”. To configure a gateway, proceed as described in Section 13.4.1.5, “Configuring Routing”.
One network device can have multiple IP addresses. These so-called aliases or labels work with IPv4 only; with IPv6 they will be ignored. Using iproute2, network interfaces can have one or more addresses.
To use YaST to set additional addresses for your network card, proceed as follows:

Select a card from the list of detected cards in the Overview tab of the YaST Network Settings dialog and click Edit.

In the Address › Additional Addresses tab, click Add.

Enter the IPv4 Address Label, IP Address, and Netmask. Do not include the interface name in the alias name.

To activate the configuration, confirm the settings.
It is possible to change the device name of the network card when it is used. It is also possible to determine whether the network card should be identified by udev via its hardware (MAC) address or via the bus ID. The latter option is preferable in large servers to simplify hotplugging of cards. To set these options with YaST, proceed as follows:

Select a card from the list of detected cards in the Overview tab of the YaST Network Settings dialog and click Edit.

Go to the Hardware tab. The current device name is shown in Udev Rules. Click Change.

Select whether udev should identify the card by its MAC Address or Bus ID. The current MAC address and bus ID of the card are shown in the dialog.

To change the device name, check the Change Device Name option and edit the name.

To activate the configuration, confirm the settings.
For some network cards, several Kernel drivers may be available. If the card is already configured, YaST allows you to select a Kernel driver to be used from a list of available suitable drivers. It is also possible to specify options for the Kernel driver. To set these options with YaST, proceed as follows:
Select a card from the list of detected cards in the Overview tab of the YaST Network Settings module and click Edit.

Go to the Hardware tab.

Select the Kernel driver to be used in Module Name. Enter options for the driver in Options in the form option=value. If more options are used, they should be space-separated.
To activate the configuration, confirm the settings.
If you use the method with wicked
, you can configure
your device to either start during boot, on cable connection, on card
detection, manually, or never. To change device start-up, proceed as
follows:
In YaST select a card from the list of detected cards in System › Network Settings and click Edit.

In the General tab, select the desired entry from Device Activation.

Choose Manually to control the interface manually with ifup. Choose Never to not start the device at all. The On NFSroot option is similar to At Boot Time, but the interface does not shut down with the systemctl stop network command; the network service also cares about the wicked service if wicked is active. Use this if you use an NFS or iSCSI root file system.
To activate the configuration, confirm the settings.
On (diskless) systems where the root partition is mounted via network as an NFS share, you need to be careful when configuring the network device with which the NFS share is accessible.
When shutting down or rebooting the system, the default processing order is to turn off network connections, then unmount the root partition. With NFS root, this order causes problems as the root partition cannot be cleanly unmounted because the network connection to the NFS share is already deactivated. To prevent the system from deactivating the relevant network device, open the network device configuration tab as described in Section 13.4.1.2.5, “Activating the Network Device”, and choose On NFSroot in the Device Activation pane.
You can set a maximum transmission unit (MTU) for the interface. MTU refers to the largest allowed packet size in bytes. A higher MTU brings higher bandwidth efficiency. However, large packets can block up a slow interface for some time, increasing the lag for further packets.
In YaST select a card from the list of detected cards in System › Network Settings and click Edit.

In the General tab, select the desired Set MTU entry from the list.

To activate the configuration, confirm the settings.
In YaST select the InfiniBand device in System › Network Settings and click Edit.

In the General tab, select one of the IP-over-InfiniBand (IPoIB) modes: connected (default) or datagram.

To activate the configuration, confirm the settings.
For more information about InfiniBand, see /usr/src/linux/Documentation/infiniband/ipoib.txt.
Without having to enter the detailed firewall setup as described in Book “Security Guide”, Chapter 15 “Masquerading and Firewalls”, Section 15.4.1 “Configuring the Firewall with YaST”, you can determine the basic firewall setup for your device as part of the device setup. Proceed as follows:
Open the YaST System › Network Settings module. In the Overview tab, select a card from the list of detected cards and click Edit.

Enter the General tab of the dialog.

Determine the Firewall Zone to which your interface should be assigned. The following options are available:

Firewall Disabled
This option is available only if the firewall is disabled and the firewall does not run at all. Only use this option if your machine is part of a greater network that is protected by an outer firewall.
Automatically Assign Zone
This option is available only if the firewall is enabled. The firewall is running and the interface is automatically assigned to a firewall zone. The zone which contains the keyword any or the external zone will be used for such an interface.
Internal Zone (Unprotected)
The firewall is running, but does not enforce any rules to protect this interface. Use this option if your machine is part of a greater network that is protected by an outer firewall. It is also useful for the interfaces connected to the internal network, when the machine has more network interfaces.
Demilitarized Zone
A demilitarized zone is an additional line of defense in front of an internal network and the (hostile) Internet. Hosts assigned to this zone can be reached from the internal network and from the Internet, but cannot access the internal network.
External Zone
The firewall is running on this interface and fully protects it against other—presumably hostile—network traffic. This is the default option.
To activate the configuration, confirm the settings.
If a network card is not detected correctly, the card is not included in the list of detected cards. If you are sure that your system includes a driver for your card, you can configure it manually. You can also configure special network device types, such as bridge, bond, TUN or TAP. To configure an undetected network card (or a special device) proceed as follows:
In the System › Network Settings › Overview dialog in YaST, click Add.

In the Hardware dialog, set the Device Type of the interface from the available options and the Configuration Name. If the network card is a PCMCIA or USB device, activate the respective check box and exit this dialog with Next. Otherwise, you can define the Kernel Module Name to be used for the card and its Options, if necessary.
In Ethtool Options, you can set ethtool options used by ifup for the interface. See the ethtool manual page for available options. If the option string starts with a - (for example, -K interface_name rx on), the second word in the string is replaced with the current interface name. Otherwise (for example, autoneg off speed 10), ifup prepends -s interface_name.
Click Next.

Configure any needed options, such as the IP address, device activation or firewall zone for the interface in the General, Address, and Hardware tabs. For more information about the configuration options, see Section 13.4.1.2, “Changing the Configuration of a Network Card”.

If you selected Wireless as the device type of the interface, configure the wireless connection in the next dialog.

To activate the new network configuration, confirm the settings.
If you did not change the network configuration during installation and the Ethernet card was already available, a host name was automatically generated for your computer and DHCP was activated. The same applies to the name service information your host needs to integrate into a network environment. If DHCP is used for network address setup, the list of domain name servers is automatically filled with the appropriate data. If a static setup is preferred, set these values manually.
To change the name of your computer and adjust the name server search list, proceed as follows:
Go to the Network Settings › Hostname/DNS tab in YaST.

Enter the Hostname and, if needed, the Domain Name. The domain is especially important if the machine is a mail server. Note that the host name is global and applies to all set network interfaces.

If you are using DHCP to get an IP address, the host name of your computer will be automatically set by the DHCP. You should disable this behavior if you connect to different networks, because they may assign different host names and changing the host name at runtime may confuse the graphical desktop. To disable using DHCP to get an IP address, deactivate Change Hostname via DHCP.
Assign Hostname to Loopback IP associates your host name with the 127.0.0.2 (loopback) IP address in /etc/hosts. This is a useful option if you want to have the host name resolvable at all times, even without an active network.
In Modify DNS Configuration, choose the way the DNS configuration (name servers, search list, the content of the /etc/resolv.conf file) is modified.

If the Use Default Policy option is selected, the configuration is handled by the netconfig script, which merges the data defined statically (with YaST or in the configuration files) with data obtained dynamically (from the DHCP client or NetworkManager). This default policy is sufficient in most cases.
If the Only Manually option is selected, netconfig is not allowed to modify the /etc/resolv.conf file. However, this file can be edited manually.
If the Custom Policy option is selected, a Custom Policy Rule string defining the merge policy should be specified. The string consists of a comma-separated list of interface names to be considered a valid source of settings. Except for complete interface names, basic wild cards to match multiple interfaces are also allowed. For example, eth* ppp? will first target all eth and then all ppp0-ppp9 interfaces. There are two special policy values that indicate how to apply the static settings defined in the /etc/sysconfig/network/config file:
STATIC
The static settings need to be merged together with the dynamic settings.
STATIC_FALLBACK
The static settings are used only when no dynamic configuration is available.
For more information, see the man page of netconfig(8) (man 8 netconfig).
Enter the Name Servers and fill in the Domain Search list. Name servers must be specified by IP addresses, such as 192.168.1.116, not by host names. Names specified in the Domain Search tab are domain names used for resolving host names without a specified domain. If more than one Domain Search is used, separate domains with commas or white space.

To activate the configuration, confirm the settings.
It is also possible to edit the host name using YaST from the command line. The changes made by YaST take effect immediately (which is not the case when editing the /etc/HOSTNAME file manually). To change the host name, use the following command:
yast dns edit hostname=hostname
To change the name servers, use the following commands:
yast dns edit nameserver1=192.168.1.116
yast dns edit nameserver2=192.168.1.117
yast dns edit nameserver3=192.168.1.118
To make your machine communicate with other machines and other networks, routing information must be given to make network traffic take the correct path. If DHCP is used, this information is automatically provided. If a static setup is used, this data must be added manually.
In YaST go to Network Settings › Routing.

Enter the IP address of the Default Gateway (IPv4 and IPv6 if necessary). The default gateway matches every possible destination, but if a routing table entry exists that matches the required address, this will be used instead of the default route via the Default Gateway.

More entries can be entered in the Routing Table. Enter the Destination network IP address, the Gateway and the Netmask. Select the Device through which the traffic to the defined network will be routed; the minus sign - stands for any device. To enter a default gateway into the table, use default in the Destination field.
If more default routes are used, it is possible to specify the metric option to determine which route has a higher priority. To specify the metric option, enter - metric number in Options. The route with the lowest metric is used as default. If the network device is disconnected, its route will be removed and the next one will be used.
However, the current Kernel does not use metric in static routing, only routing daemons like multipathd do.
If the system is a router, enable IPv4 Forwarding and IPv6 Forwarding in the Network Settings as needed.

To activate the configuration, confirm the settings.
NetworkManager is the ideal solution for laptops and other portable computers. With NetworkManager, you do not need to worry about configuring network interfaces and switching between networks when you are moving.
NetworkManager and wicked
#
However, NetworkManager is not a suitable solution for all cases, so you can
still choose between the wicked
controlled method for
managing network connections and NetworkManager. If you want to manage your
network connection with NetworkManager, enable NetworkManager in the YaST Network
Settings module as described in Section 28.2, “Enabling or Disabling NetworkManager” and
configure your network connections with NetworkManager. For a list of use cases
and a detailed description of how to configure and use NetworkManager, refer to
Chapter 28, Using NetworkManager.
Some differences between wicked and NetworkManager:
root Privileges
If you use NetworkManager for network setup, you can easily switch, stop or
start your network connection at any time from within your desktop
environment using an applet. NetworkManager also makes it possible to change
and configure wireless card connections without requiring
root
privileges. For this reason, NetworkManager is the ideal
solution for a mobile workstation.
wicked
also provides some ways to switch, stop or
start the connection with or without user intervention, like
user-managed devices. However, this always requires root
privileges to change or configure a network device. This is often a
problem for mobile computing, where it is not possible to preconfigure
all the connection possibilities.
Both wicked
and NetworkManager can handle network
connections with a wireless network (with WEP, WPA-PSK, and
WPA-Enterprise access) and wired networks using DHCP and static
configuration. They also support connection through dial-up and VPN.
With NetworkManager you can also connect a mobile broadband (3G) modem
or set up a DSL connection, which is not possible with the traditional
configuration.
NetworkManager tries to keep your computer connected at all times using the
best connection available. If the network cable is accidentally
disconnected, it tries to reconnect. It can find the network with the
best signal strength from the list of your wireless connections and
automatically use it to connect. To get the same functionality with
wicked
, more configuration effort is required.
The individual network connection settings created with NetworkManager are stored in configuration profiles. The system connections configured with either NetworkManager or YaST are saved in /etc/NetworkManager/system-connections/* or in /etc/sysconfig/network/ifcfg-*. For GNOME, all user-defined connections are stored in GConf.
In case no profile is configured, NetworkManager automatically creates one and
names it Auto $INTERFACE-NAME
. That is made in an
attempt to work without any configuration for as many cases as (securely)
possible. If the automatically created profiles do not suit your needs,
use the network connection configuration dialogs provided by GNOME to
modify them as desired. For more information, see
Section 28.3, “Configuring Network Connections”.
On centrally administered machines, certain NetworkManager features can be controlled or disabled with PolKit, for example if a user is allowed to modify administrator defined connections or if a user is allowed to define their own network configurations. To view or change the respective NetworkManager policies, start the graphical Authorizations tool for PolKit. In the tree on the left side, find them below the network-manager-settings entry. For an introduction to PolKit and details on how to use it, refer to Book “Security Guide”, Chapter 9 “Authorization with PolKit”.

Manual configuration of the network software should be the last alternative. Using YaST is recommended. However, this background information about the network configuration can also assist your work with YaST.
wicked Network Configuration #
The tool and library called wicked
provides a new
framework for network configuration.
One of the challenges with traditional network interface management is that different layers of network management get jumbled together into one single script, or at most two different scripts, that interact with each other in a not-really-well-defined way, with side effects that are difficult to be aware of, obscure constraints and conventions, etc. Several layers of special hacks for a variety of different scenarios increase the maintenance burden. Address configuration protocols are being used that are implemented via daemons like dhcpcd, which interact rather poorly with the rest of the infrastructure. Funky interface naming schemes that require heavy udev support are introduced to achieve persistent identification of interfaces.
The idea of wicked is to decompose the problem in several ways. None of them is entirely novel, but trying to put ideas from different projects together is hopefully going to create a better solution overall.
One approach is to use a client/server model. This allows wicked to define standardized facilities for things like address configuration that are well integrated with the overall framework. For example, with address configuration, the administrator may request that an interface should be configured via DHCP or IPv4 zeroconf, and all the address configuration service does is obtain the lease from its server, and pass it on to the wicked server process, which installs the requested addresses and routes.
The other approach to decomposing the problem is to enforce the layering aspect. For any type of network interface, it is possible to define a dbus service that configures the network interface's device layer—a VLAN, a bridge, a bonding, or a paravirtualized device. Common functionality, such as address configuration, is implemented by joint services that are layered on top of these device specific services, without having to implement them specifically.
The wicked framework implements these two aspects by using a variety of dbus services, which get attached to a network interface depending on its type. Here is a rough overview of the current object hierarchy in wicked.
Each network interface is represented via a child object of /org/opensuse/Network/Interfaces. The name of the child object is given by its ifindex. For example, the loopback interface, which usually gets ifindex 1, is /org/opensuse/Network/Interfaces/1, and the first Ethernet interface registered is /org/opensuse/Network/Interfaces/2.
Each network interface has a “class” associated with it,
which is used to select the dbus interfaces it supports. By default, each
network interface is of class netif
, and wickedd
will
automatically attach all interfaces compatible with this class. In the
current implementation, this includes the following interfaces:
Generic network interface functions, such as taking the link up or down, assigning an MTU, etc.
Address configuration services for DHCP, IPv4 zeroconf, etc.
Beyond this, network interfaces may require or offer special
configuration mechanisms. For example, for an Ethernet device, you may
want to be able to control the link speed, offloading of checksumming,
etc. To achieve this, Ethernet devices have a class of their own, called
netif-ethernet
, which is a subclass of
netif
. As a consequence, the dbus interfaces assigned
to an Ethernet interface include all the services listed above, plus
org.opensuse.Network.Ethernet
, which is a
service available only to objects belonging to the
netif-ethernet
class.
Similarly, there exist classes for interface types like bridges, VLANs, bonds, or infinibands.
How do you interact with an interface that needs to be created first, such as a VLAN, which is really a virtual network interface that sits on top of an Ethernet device? For these, wicked
defines factory interfaces, such as
org.opensuse.Network.VLAN.Factory
. Such a
factory interface offers a single function that lets you create an
interface of the requested type. These factory interfaces are attached to
the /org/opensuse/Network/Interfaces
list node.
wicked
currently supports:
Configuration file back-ends to parse SUSE style
/etc/sysconfig/network
files.
An internal configuration back-end to represent network interface configuration in XML.
Bring up and shutdown of “normal” network interfaces such as Ethernet or InfiniBand, VLAN, bridge, bonds, tun, tap, dummy, macvlan, macvtap, hsi, qeth, iucv, and wireless (currently limited to one wpa-psk/eap network) devices.
A built-in DHCPv4 client and a built-in DHCPv6 client.
The nanny daemon (enabled by default) helps to automatically bring up configured interfaces as soon as the device is available (interface hotplugging) and set up the IP configuration when a link (carrier) is detected.
wicked #
On openSUSE Leap, wicked runs by default. In case you want to check what is currently enabled and whether it is running, call:
systemctl status network
If wicked is enabled, you will see something along these lines:
wicked.service - wicked managed network interfaces
    Loaded: loaded (/usr/lib/systemd/system/wicked.service; enabled)
    ...
In case something different is running (for example, NetworkManager) and you want to switch to wicked, first stop what is running and then enable wicked:
systemctl is-active network && \
systemctl stop network
systemctl enable --force wicked
This enables the wicked services, creates the network.service to wicked.service alias link, and starts the network at the next boot.
Starting the server process:
systemctl start wickedd
This starts wickedd (the main server) and associated supplicants:
/usr/lib/wicked/bin/wickedd-auto4 --systemd --foreground
/usr/lib/wicked/bin/wickedd-dhcp4 --systemd --foreground
/usr/lib/wicked/bin/wickedd-dhcp6 --systemd --foreground
/usr/sbin/wickedd --systemd --foreground
/usr/sbin/wickedd-nanny --systemd --foreground
Then bringing up the network:
systemctl start wicked
Alternatively use the network.service alias:
systemctl start network
These commands are using the default or system configuration sources as defined in /etc/wicked/client.xml.
To enable debugging, set WICKED_DEBUG in /etc/sysconfig/network/config, for example:
WICKED_DEBUG="all"
Or, to omit some:
WICKED_DEBUG="all,-dbus,-objectmodel,-xpath,-xml"
Use the client utility to display interface information for all interfaces or the interface specified with ifname:
wicked show all
wicked show ifname
In XML output:
wicked show-xml all
wicked show-xml ifname
Bringing up one interface:
wicked ifup eth0
wicked ifup wlan0
...
Because there is no configuration source specified, the wicked client checks its default sources of configuration defined in /etc/wicked/client.xml:
firmware: iSCSI Boot Firmware Table (iBFT)

compat: ifcfg files—implemented for compatibility
Whatever wicked gets from those sources for a given interface is applied. The intended order of importance is firmware, then compat—this may be changed in the future.
For more information, see the wicked man page.
Nanny is an event and policy driven daemon that is responsible for
asynchronous or unsolicited scenarios such as hotplugging devices. Thus
the nanny daemon helps with starting or restarting delayed or temporarily
gone devices. Nanny monitors device and link changes, and integrates new
devices defined by the current policy set. Nanny continues to set up even
if ifup
already exited because of specified timeout
constraints.
By default, the nanny daemon is active on the system.
It is enabled in the
/etc/wicked/common.xml
configuration file:
<config>
  ...
  <use-nanny>true</use-nanny>
</config>
This setting causes ifup and ifreload to apply a policy with the effective
configuration to the nanny daemon; then, nanny configures wickedd
and thus ensures hotplug support. It
waits in the background for events or changes (such as new devices or
carrier on).
For bonds and bridges, it may make sense to define the entire device topology in one file (ifcfg-bondX), and bring it up in one go. wicked then can bring up the whole configuration if you specify the top level interface names (of the bridge or bond):
wicked ifup br0
This command automatically sets up the bridge and its dependencies in the appropriate order without the need to list the dependencies (ports, etc.) separately.
To bring up multiple interfaces in one command:
wicked ifup bond0 br0 br1 br2
Or also all interfaces:
wicked ifup all
With wicked
, there is no need to actually take down
an interface to reconfigure it (unless it is required by the Kernel).
For example, to add another IP address or route to a statically
configured network interface, add the IP address to the interface
definition, and do another “ifup” operation. The server
will try hard to update only those settings that have changed. This
applies to link-level options such as the device MTU or the MAC address,
and network-level settings, such as addresses, routes, or even
the address configuration mode (for example, when moving from a static
configuration to DHCP).
Things get tricky of course with virtual interfaces combining several real devices such as bridges or bonds. For bonded devices, it is not possible to change certain parameters while the device is up. Doing that will result in an error.
However, what should still work, is the act of adding or removing the child devices of a bond or bridge, or choosing a bond's primary interface.
wicked
is designed to be extensible with shell
scripts. These extensions can be defined in the
config.xml
file.
Currently, several different classes of extensions are supported:
link configuration: these are scripts responsible for setting up a device's link layer according to the configuration provided by the client, and for tearing it down again.
address configuration: these are scripts responsible for managing a
device's address configuration. Usually address configuration and DHCP
are managed by wicked
itself, but can be
implemented by means of extensions.
firewall extension: these scripts can apply firewall rules.
Typically, extensions have a start and a stop command, an optional “pid file”, and a set of environment variables that get passed to the script.
To illustrate how this is supposed to work, look at a firewall extension defined in /etc/wicked/server.xml:
<dbus-service interface="org.opensuse.Network.Firewall">
  <action name="firewallUp" command="/etc/wicked/extensions/firewall up"/>
  <action name="firewallDown" command="/etc/wicked/extensions/firewall down"/>
  <!-- default environment for all calls to this extension script -->
  <putenv name="WICKED_OBJECT_PATH" value="$object-path"/>
  <putenv name="WICKED_INTERFACE_NAME" value="$property:name"/>
  <putenv name="WICKED_INTERFACE_INDEX" value="$property:index"/>
</dbus-service>
The extension is attached to the dbus-service interface and defines commands to execute for the actions of this interface. Further, the declaration can define and initialize environment variables passed to the actions.
You can extend the handling of configuration files with scripts as well.
For example, DNS updates from leases are ultimately handled by the
extensions/resolver
script, with behavior
configured in server.xml
:
<system-updater name="resolver">
  <action name="backup" command="/etc/wicked/extensions/resolver backup"/>
  <action name="restore" command="/etc/wicked/extensions/resolver restore"/>
  <action name="install" command="/etc/wicked/extensions/resolver install"/>
  <action name="remove" command="/etc/wicked/extensions/resolver remove"/>
</system-updater>
When an update arrives in wickedd
, the system
updater routines parse the lease and call the appropriate commands
(backup
, install
, etc.) in the
resolver script. This in turn configures the DNS settings using
/sbin/netconfig
, or by manually writing
/etc/resolv.conf
as a fallback.
This section provides an overview of the network configuration files and explains their purpose and the format used.
/etc/sysconfig/network/ifcfg-* #

These files contain the traditional configurations for network interfaces. In SUSE Linux Enterprise 11, this was the only supported format besides iBFT firmware.
wicked and the ifcfg-* Files
wicked reads these files if you specify the compat: prefix. According to the SUSE Linux Enterprise Server 12 default configuration in /etc/wicked/client.xml, wicked tries these files before the XML configuration files in /etc/wicked/ifconfig.
The --ifconfig switch is provided mostly for testing only. If specified, default configuration sources defined in /etc/wicked/ifconfig are not applied.
The ifcfg-* files include information such as the start mode and the IP address. Possible parameters are described in the manual page of ifup. Additionally, most variables from the dhcp and wireless files can be used in the ifcfg-* files if a general setting should be used for only one interface. However, most of the /etc/sysconfig/network/config variables are global and cannot be overridden in ifcfg-files. For example, NETCONFIG_* variables are global.
For configuring macvlan and macvtap interfaces, see the ifcfg-macvlan and ifcfg-macvtap man pages. For example, for a macvlan interface provide an ifcfg-macvlan0 file with settings as follows:
STARTMODE='auto'
MACVLAN_DEVICE='eth0'
#MACVLAN_MODE='vepa'
#LLADDR=02:03:04:05:06:aa
For ifcfg.template, see Section 13.6.2.2, “/etc/sysconfig/network/config, /etc/sysconfig/network/dhcp, and /etc/sysconfig/network/wireless”.
/etc/sysconfig/network/config, /etc/sysconfig/network/dhcp, and /etc/sysconfig/network/wireless #
The file config contains general settings for the behavior of ifup, ifdown and ifstatus. dhcp contains settings for DHCP and wireless for wireless LAN cards. The variables in all three configuration files are commented. Some variables from /etc/sysconfig/network/config can also be used in ifcfg-* files, where they are given a higher priority. The /etc/sysconfig/network/ifcfg.template file lists variables that can be specified in a per interface scope. However, most of the /etc/sysconfig/network/config variables are global and cannot be overridden in ifcfg-files. For example, NETWORKMANAGER or NETCONFIG_* variables are global.
In SUSE Linux Enterprise 11, DHCPv6 used to work even on networks where IPv6 Router Advertisements (RAs) were not configured properly. Starting with SUSE Linux Enterprise 12, DHCPv6 will correctly require that at least one of the routers on the network sends out RAs that indicate that this network is managed by DHCPv6.
For those networks where the router cannot be configured correctly, there is an ifcfg option that allows the user to override this behavior by specifying DHCLIENT6_MODE='managed' in the ifcfg file.
You can also activate this workaround with a boot parameter in the
installation system:
ifcfg=eth0=dhcp6,DHCLIENT6_MODE=managed
/etc/sysconfig/network/routes and /etc/sysconfig/network/ifroute-* #
The static routing of TCP/IP packets is determined by the /etc/sysconfig/network/routes and /etc/sysconfig/network/ifroute-* files. All the static routes required by the various system tasks can be specified in /etc/sysconfig/network/routes: routes to a host, routes to a host via a gateway and routes to a network. For each interface that needs individual routing, define an additional configuration file: /etc/sysconfig/network/ifroute-*. Replace the wild card (*) with the name of the interface. The entries in the routing configuration files look like this:
# Destination Gateway Netmask Interface Options
The route's destination is in the first column. This column may contain
the IP address of a network or host or, in the case of
reachable name servers, the fully qualified network
or host name. The network should be written in CIDR notation (address
with the associated routing prefix-length) such as 10.10.0.0/16 for IPv4
or fc00::/7 for IPv6 routes. The keyword default
indicates that the route is the default gateway in the same address
family as the gateway. For devices without a gateway use explicit
0.0.0.0/0 or ::/0 destinations.
The second column contains the default gateway or a gateway through which a host or network can be accessed.
The third column is deprecated; it used to contain the IPv4 netmask of
the destination. For IPv6 routes, the default route, or when using a
prefix-length (CIDR notation) in the first column, enter a dash
(-
) here.
The fourth column contains the name of the interface. If you leave it
empty using a dash (-
), it can cause unintended
behavior in /etc/sysconfig/network/routes
. For more
information, see the routes
man page.
An (optional) fifth column can be used to specify special options. For
details, see the routes
man page.
# --- IPv4 routes in CIDR prefix notation:
# Destination       [Gateway]                 -     Interface
127.0.0.0/8         -                         -     lo
204.127.235.0/24    -                         -     eth0
default             204.127.235.41            -     eth0
207.68.156.51/32    207.68.145.45             -     eth1
192.168.0.0/16      207.68.156.51             -     eth1

# --- IPv4 routes in deprecated netmask notation:
# Destination       [Dummy/Gateway]           Netmask           Interface
127.0.0.0           0.0.0.0                   255.255.255.0     lo
204.127.235.0       0.0.0.0                   255.255.255.0     eth0
default             204.127.235.41            0.0.0.0           eth0
207.68.156.51       207.68.145.45             255.255.255.255   eth1
192.168.0.0         207.68.156.51             255.255.0.0       eth1

# --- IPv6 routes are always using CIDR notation:
# Destination       [Gateway]                 -     Interface
2001:DB8:100::/64   -                         -     eth0
2001:DB8:100::/32   fe80::216:3eff:fe6d:c042  -     eth0
/etc/resolv.conf
The domain to which the host belongs is specified in
/etc/resolv.conf
(keyword
search
). Up to six domains with a total of 256
characters can be specified with the search
option. When resolving a name that is not fully qualified, an attempt is
made to generate one by attaching the individual
search
entries. Up to 3 name servers can be
specified with the nameserver
option, each on a
line of its own. Comments are preceded by hash mark or semicolon signs
(#
or ;
). As an example, see
Example 13.6, “/etc/resolv.conf
”.
However, the /etc/resolv.conf
should not be edited
by hand. Instead, it is generated by the netconfig
script. To define static DNS configuration without using YaST, edit
the appropriate variables manually in the
/etc/sysconfig/network/config
file:
NETCONFIG_DNS_STATIC_SEARCHLIST
list of DNS domain names used for host name lookup
NETCONFIG_DNS_STATIC_SERVERS
list of name server IP addresses to use for host name lookup
NETCONFIG_DNS_FORWARDER
the name of the DNS forwarder that needs to be configured, for example
bind
or resolver
NETCONFIG_DNS_RESOLVER_OPTIONS
arbitrary options that will be written to
/etc/resolv.conf
, for example:
debug attempts:1 timeout:10
For more information, see the resolv.conf
man page.
NETCONFIG_DNS_RESOLVER_SORTLIST
list of up to 10 items, for example:
130.155.160.0/255.255.240.0 130.155.0.0
For more information, see the resolv.conf
man
page.
To disable DNS configuration using netconfig, set
NETCONFIG_DNS_POLICY=''
. For more information about
netconfig
, see the
netconfig(8)
man page (man 8
netconfig
).
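Putting this together, a minimal sketch of a static DNS setup in /etc/sysconfig/network/config could look like this (the policy value and addresses are illustrative):
NETCONFIG_DNS_POLICY="STATIC"
NETCONFIG_DNS_STATIC_SEARCHLIST="example.com"
NETCONFIG_DNS_STATIC_SERVERS="192.168.1.116 192.168.1.117"
Afterward, run netconfig update -m dns to regenerate /etc/resolv.conf from these settings.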
/etc/resolv.conf
# Our domain
search example.com
#
# We use dns.example.com (192.168.1.116) as nameserver
nameserver 192.168.1.116
/sbin/netconfig
netconfig
is a modular tool to manage additional
network configuration settings. It merges statically defined settings
with settings provided by autoconfiguration mechanisms as DHCP or PPP
according to a predefined policy. The required changes are applied to the
system by calling the netconfig modules that are responsible for
modifying a configuration file and restarting a service or a similar
action.
netconfig
recognizes three main actions. The
netconfig modify
and netconfig
remove
commands are used by daemons such as DHCP or PPP to
provide or remove settings to netconfig. Only the netconfig
update
command is available for the user:
modify
The netconfig modify
command modifies the current
interface and service specific dynamic settings and updates the
network configuration. Netconfig reads settings from standard input or
from a file specified with the --lease-file
filename
option and internally
stores them until a system reboot (or the next modify or remove
action). Already existing settings for the same interface and service
combination are overwritten. The interface is specified by the
-i interface_name
parameter. The service is specified by the -s
service_name
parameter.
remove
The netconfig remove
command removes the dynamic
settings provided by a modify action for the specified interface
and service combination and updates the network configuration. The
interface is specified by the -i
interface_name
parameter. The
service is specified by the -s
service_name
parameter.
update
The netconfig update
command updates the network
configuration using current settings. This is useful when the policy
or the static configuration has changed. Use the -m
module_type
parameter, if you want
to update a specified service only (dns
,
nis
, or ntp
).
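For example, to re-apply only the DNS settings after the static configuration or the policy has changed, a call like the following should suffice:
netconfig update -m dns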
The netconfig policy and the static configuration settings are defined
either manually or using YaST in the
/etc/sysconfig/network/config
file. The dynamic
configuration settings provided by autoconfiguration tools such as DHCP
or PPP are delivered directly by these tools with the netconfig
modify
and netconfig remove
actions.
When NetworkManager is enabled, netconfig (in policy mode
auto
) uses only NetworkManager settings, ignoring settings
from any other interfaces configured using the traditional ifup method.
If NetworkManager does not provide any setting, static settings are used as a
fallback. A mixed usage of NetworkManager and the wicked
method is not supported.
For more information about netconfig
, see man
8 netconfig
.
/etc/hosts
In this file, shown in Example 13.7, “/etc/hosts
”, IP addresses
are assigned to host names. If no name server is implemented, all hosts
to which an IP connection will be set up must be listed here. For each
host, enter a line consisting of the IP address, the fully qualified host
name, and the host name into the file. The IP address must be at the
beginning of the line and the entries separated by blanks and tabs.
Comments are always preceded by the #
sign.
/etc/hosts
127.0.0.1 localhost
192.168.2.100 jupiter.example.com jupiter
192.168.2.101 venus.example.com venus
/etc/networks
Here, network names are converted to network addresses. The format is
similar to that of the hosts
file, except the
network names precede the addresses. See
Example 13.8, “/etc/networks
”.
/etc/networks
loopback 127.0.0.0
localnet 192.168.0.0
/etc/host.conf
Name resolution—the translation of host and network names via
the resolver library—is controlled by this
file. This file is only used for programs linked to libc4 or libc5. For
current glibc programs, refer to the settings in
/etc/nsswitch.conf
. Each parameter must always be
entered on a separate line. Comments are preceded by a #
sign. Table 13.2, “Parameters for /etc/host.conf” shows the parameters
available. A sample /etc/host.conf
is shown in
Example 13.9, “/etc/host.conf
”.
order hosts, bind
Specifies in which order the services are accessed for the name resolution. Available arguments are (separated by blank spaces or commas):
hosts: searches the /etc/hosts file
bind: accesses a name server
nis: uses NIS
multi on/off
Defines if a host entered in /etc/hosts can have multiple IP addresses.
nospoof on, spoofalert on/off
These parameters influence the name server spoofing but do not exert any influence on the network configuration.
trim domainname
The specified domain name is separated from the host name after host name resolution (as long as the host name includes the domain name). This option is useful only if names from the local domain are in the /etc/hosts file, but should still be recognized with the attached domain names.
/etc/host.conf
# We have named running
order hosts bind
# Allow multiple address
multi on
/etc/nsswitch.conf
The introduction of the GNU C Library 2.0 was accompanied by the
introduction of the Name Service Switch (NSS). Refer
to the nsswitch.conf(5)
man page and
The GNU C Library Reference Manual for details.
The order for queries is defined in the file
/etc/nsswitch.conf
. A sample
nsswitch.conf
is shown in
Example 13.10, “/etc/nsswitch.conf
”. Comments are preceded by
#
signs. In this example, the entry under the hosts database means that a request is sent first to /etc/hosts (files) and then to DNS (see Chapter 19, The Domain Name System).
/etc/nsswitch.conf
passwd: compat
group: compat

hosts: files dns
networks: files dns

services: db files
protocols: db files
rpc: files
ethers: files
netmasks: files
netgroup: files nis
publickey: files

bootparams: files
automount: files nis
aliases: files nis
shadow: compat
The “databases” available over NSS are listed in Table 13.3, “Databases Available via /etc/nsswitch.conf”. The configuration options for NSS databases are listed in Table 13.4, “Configuration Options for NSS “Databases””.
Databases Available via /etc/nsswitch.conf
aliases
Mail aliases implemented by sendmail; see man 5 aliases.
ethers
Ethernet addresses.
netmasks
List of networks and their subnet masks. Only needed, if you use subnetting.
group
User groups used by getgrent; see also the man page for group.
hosts
Host names and IP addresses, used by gethostbyname and similar functions.
netgroup
Valid host and user lists in the network for controlling access permissions; see the netgroup(5) man page.
networks
Network names and addresses, used by getnetent.
publickey
Public and secret keys for Secure_RPC used by NFS and NIS+.
passwd
User passwords, used by getpwent; see the passwd(5) man page.
protocols
Network protocols, used by getprotoent; see the protocols(5) man page.
rpc
Remote procedure call names and addresses, used by getrpcbyname and similar functions.
services
Network services, used by getservent.
shadow
Shadow passwords of users, used by getspnam; see the shadow(5) man page.

Configuration Options for NSS “Databases”
files
directly access files, for example, /etc/aliases
db
access via a database
nis, nisplus
NIS, see also Book “Security Guide”, Chapter 3 “Using NIS”
dns
can only be used as an extension for hosts and networks
compat
can only be used as an extension for passwd, shadow and group
/etc/nscd.conf
This file is used to configure nscd (name service cache daemon). See the
nscd(8)
and
nscd.conf(5)
man pages. By default, the system
entries of passwd
and groups
are cached
by nscd. This is important for the performance of directory services,
like NIS and LDAP, because otherwise the network connection needs to be
used for every access to names or groups. hosts
is not
cached by default, because the mechanism in nscd to cache hosts makes the
local system unable to trust forward and reverse lookup checks. Instead
of asking nscd to cache names, set up a caching DNS server.
If the caching for passwd
is activated, it usually takes
about fifteen seconds until a newly added local user is recognized.
Reduce this waiting time by restarting nscd with:
systemctl restart nscd
/etc/HOSTNAME
/etc/HOSTNAME
contains the fully qualified host name
(FQHN). The fully qualified host name is the host name with the domain
name attached. This file must contain only one line (in which the host
name is set). It is read while the machine is booting.
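For example, for the host jupiter from Example 13.7, the file would contain the single line:
jupiter.example.com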
Before you write your configuration to the configuration files, you can
test it. To set up a test configuration, use the ip
command. To test the connection, use the ping
command.
The command ip
changes the network configuration
directly without saving it in the configuration file. Unless you enter
your configuration in the correct configuration files, the changed
network configuration is lost on reboot.
ifconfig
and route
Are Obsolete
The ifconfig
and route
tools are
obsolete. Use ip
instead.
ifconfig
, for example, limits interface names to
9 characters.
ip
ip
is a tool to show and configure network devices,
routing, policy routing, and tunnels.
ip
is a very complex tool. Its common syntax is
ip
options
object
command
. You can work with the
following objects:
link
This object represents a network device.
address
This object represents the IP address of a device.
neighbour
This object represents an ARP or NDISC cache entry.
route
This object represents a routing table entry.
rule
This object represents a rule in the routing policy database.
maddress
This object represents a multicast address.
mroute
This object represents a multicast routing cache entry.
tunnel
This object represents a tunnel over IP.
If no command is given, the default command is used (usually
list
).
Change the state of a device with the command ip link
set
device_name
. For example, to deactivate device eth0, enter
ip link
set
eth0 down
. To activate it again, use
ip link set
eth0 up
.
After activating a device, you can configure it. To set the IP address,
use ip addr
add
ip_address +
dev device_name
. For example, to set
the address of the interface eth0 to 192.168.12.154/30 with standard
broadcast (option brd
), enter ip
addr
add 192.168.12.154/30 brd + dev
eth0
.
To have a working connection, you must also configure the default
gateway. To set a gateway for your system, enter ip route add default via gateway_ip_address
. To translate
one IP address to another, use nat
: ip route
add
nat
ip_address
via
other_ip_address
.
To display all devices, use ip link ls
. To display
the running interfaces only, use ip link ls up
. To
print interface statistics for a device, enter ip -s link
ls
device_name
. To view addresses of
your devices, enter ip addr
. In the output of the
ip addr
, also find information about MAC addresses of
your devices. To show all routes, use ip route show
.
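Taken together, a short session could look like the following sketch (the device name and addresses are illustrative):
ip link set eth0 up                            # activate the device
ip addr add 192.168.12.154/30 brd + dev eth0   # assign an IP address
ip route add default via 192.168.12.153        # set the default gateway
ip -s link ls eth0                             # print interface statistics
ip route show                                  # show all routes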
For more information about using ip
, enter
ip
help
or see the
ip(8)
man page. The help
option is also available for all ip
subcommands. If,
for example, you need help for
ip
addr
, enter
ip
addr help
. Find the
ip
manual in
/usr/share/doc/packages/iproute2/ip-cref.pdf
.
The ping
command is the standard tool for testing
whether a TCP/IP connection works. It uses the ICMP protocol to send a
small data packet, ECHO_REQUEST datagram, to the destination host,
requesting an immediate reply. If this works, ping
displays a message to that effect. This indicates that the network link
is functioning.
ping
does more than only test the function of the
connection between two computers: it also provides some basic
information about the quality of the connection. In
Example 13.11, “Output of the Command ping”, you can see an example of
the ping
output. The second-to-last line contains
information about the number of transmitted packets, packet loss, and
total time of ping
running.
As the destination, you can use a host name or IP address, for example,
ping
example.com
or
ping
192.168.3.100
. The program
sends packets until you press
Ctrl–C.
If you only need to check the functionality of the connection, you can
limit the number of the packets with the -c
option. For
example to limit ping to three packets, enter
ping
-c 3
example.com
.
ping -c 3 example.com
PING example.com (192.168.3.100) 56(84) bytes of data.
64 bytes from example.com (192.168.3.100): icmp_seq=1 ttl=49 time=188 ms
64 bytes from example.com (192.168.3.100): icmp_seq=2 ttl=49 time=184 ms
64 bytes from example.com (192.168.3.100): icmp_seq=3 ttl=49 time=183 ms

--- example.com ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2007ms
rtt min/avg/max/mdev = 183.417/185.447/188.259/2.052 ms
The default interval between two packets is one second. To change the
interval, ping provides the option -i
. For example, to
increase the ping interval to ten seconds, enter
ping
-i 10
example.com
.
In a system with multiple network devices, it is sometimes useful to
send the ping through a specific interface address. To do so, use the
-I
option with the name of the selected device, for
example, ping
-I wlan1
example.com
.
For more options and information about using ping, enter
ping
-h
or see the
ping(8)
man page.
For IPv6 addresses, use the ping6 command. Note that to ping link-local addresses, you must specify the interface with -I. The following command works if the address is reachable via eth1:
ping6 -I eth1 fe80::117:21ff:feda:a425
Apart from the configuration files described above, there are also
systemd unit files and various scripts that load the network services
while the machine is booting. These are started as soon as the system is
switched to the multi-user.target
target. Some
of these unit files and scripts are described in
Some Unit Files and Start-Up Scripts for Network Programs. For more information about
systemd
, see Chapter 10, The systemd
Daemon and
for more information about the systemd
targets,
see the man page of systemd.special
(man systemd.special
).
network.target
network.target
is the systemd target for
networking, but its meaning depends on the settings provided by the
system administrator.
For more information, see http://www.freedesktop.org/wiki/Software/systemd/NetworkTarget/.
multi-user.target
multi-user.target
is the systemd target for a
multiuser system with all required network services.
xinetd
Starts xinetd. xinetd can be used to make server services available on the system. For example, it can start vsftpd whenever an FTP connection is initiated.
rpcbind
Starts the rpcbind utility that converts RPC program numbers to universal addresses. It is needed for RPC services, such as an NFS server.
ypserv
Starts the NIS server.
ypbind
Starts the NIS client.
/etc/init.d/nfsserver
Starts the NFS server.
/etc/init.d/postfix
Controls the postfix process.
A router is a networking device that delivers and receives data (network packets) to or from more than one network. You often use a router to connect your local network to a remote network (the Internet) or to connect local network segments. With SUSE Linux Enterprise Server you can build a router with features such as NAT (Network Address Translation) or advanced firewalling.
The following are basic steps to turn SUSE Linux Enterprise Server into a router.
Enable forwarding, for example in
/etc/sysctl.d/50-router.conf
net.ipv4.conf.all.forwarding = 1
net.ipv6.conf.all.forwarding = 1
Then provide a static IPv4 and IPv6 IP setup for the interfaces. Enabling forwarding disables several mechanisms. For example, IPv6 no longer accepts an IPv6 RA (router advertisement), which also prevents the creation of a default route.
In many situations, when the same (internal) network is reachable via more than one interface, or usually when VPN is used (and already on “normal multi-homed hosts”), you must disable the IPv4 reverse path filter (this feature does not currently exist for IPv6):
net.ipv4.conf.all.rp_filter = 0
You can also filter with firewall settings instead.
To accept an IPv6 RA (from the router on an external, uplink, or ISP interface) and create a default (or also a more specific) IPv6 route again, set:
net.ipv6.conf.${ifname}.accept_ra = 2
net.ipv6.conf.${ifname}.autoconf = 0
(Note: “eth0.42
” needs to be
written as eth0/42
in a dot separated sysfs path.)
More router behavior and forwarding dependencies are described in https://www.kernel.org/doc/Documentation/networking/ip-sysctl.txt.
To provide IPv6 on your internal (DMZ) interfaces, and announce
yourself as an IPv6 router and “autoconf networks” to the
clients, install and configure radvd
in
/etc/radvd.conf
, for example:
interface eth0
{
    IgnoreIfMissing on;         # do not fail if interface missed
    AdvSendAdvert on;           # enable sending RAs
    AdvManagedFlag on;          # IPv6 addresses managed via DHCPv6
    AdvOtherConfigFlag on;      # DNS, NTP... only via DHCPv6
    AdvDefaultLifetime 3600;    # client default route lifetime of 1 hour
    prefix 2001:db8:0:1::/64    # (/64 is default and required for autoconf)
    {
        AdvAutonomous off;         # Disable address autoconf (DHCPv6 only)
        AdvValidLifetime 3600;     # prefix (autoconf addr) is valid 1 h
        AdvPreferredLifetime 1800; # prefix (autoconf addr) is preferred 1/2 h
    }
}
Lastly, configure the firewall. In SuSEfirewall2, you must set FW_ROUTE="yes" (otherwise it will reset the forwarding sysctl again) and define the interfaces in the FW_DEV_INT, FW_DEV_EXT (and FW_DEV_DMZ) zone variables as needed, perhaps also FW_MASQUERADE="yes" and FW_MASQ_DEV.
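As a sketch, assuming eth0 is the external and eth1 the internal interface, the relevant /etc/sysconfig/SuSEfirewall2 settings could look like this:
FW_ROUTE="yes"
FW_MASQUERADE="yes"
FW_DEV_EXT="eth0"
FW_DEV_INT="eth1"
FW_MASQ_DEV="$FW_DEV_EXT"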
For some systems, there is a desire to implement network connections that comply with more than the standard data security or availability requirements of a typical Ethernet device. In these cases, several Ethernet devices can be aggregated into a single bonding device.
The configuration of the bonding device is done by means of bonding module
options. The behavior is mainly affected by the mode of the bonding
device. By default, this is mode=active-backup
which means that a different slave device will become active if the active
slave fails.
Using bonding devices is only of interest for machines where you have multiple real network cards available. In most configurations, this means that you should use the bonding configuration only in Dom0. Setting up the bond in a VM Guest is only useful if you have multiple network cards assigned to the VM Guest system.
To configure a bonding device, use the following procedure:
Run YaST › System › Network Settings.
Use Add and change the Device Type to Bond. Proceed with Next.
Select how to assign the IP address to the bonding device. Three methods are at your disposal:
No IP Address
Dynamic Address (with DHCP or Zeroconf)
Statically assigned IP Address
Use the method that is appropriate for your environment.
In the Bond Slaves tab, select the Ethernet devices that should be included into the bond by activating the related check box.
Edit the Bond Driver Options. The modes that are available for configuration are the following:
balance-rr
active-backup
balance-xor
broadcast
802.3ad
802.3ad
is the standardized LACP “IEEE
802.3ad Dynamic link aggregation” mode.
balance-tlb
balance-alb
Make sure that the parameter miimon=100 is added to the Bond Driver Options. Without this parameter, the data integrity is not checked regularly.
Click Next and leave YaST with OK to create the device.
All modes, and many more options, are explained in detail in /usr/src/linux/Documentation/networking/bonding.txt after installing the package kernel-source.
In specific network environments (such as High Availability), there are cases when you need to replace a bonding slave interface with another one. The reason may be a constantly failing network device. The solution is to set up hotplugging of bonding slaves.
The bond is configured as usual (according to man 5
ifcfg-bonding
), for example:
ifcfg-bond0
STARTMODE='auto' # or 'onboot'
BOOTPROTO='static'
IPADDR='192.168.0.1/24'
BONDING_MASTER='yes'
BONDING_SLAVE_0='eth0'
BONDING_SLAVE_1='eth1'
BONDING_MODULE_OPTS='mode=active-backup miimon=100'
The slaves are specified with STARTMODE=hotplug
and
BOOTPROTO=none
:
ifcfg-eth0
STARTMODE='hotplug'
BOOTPROTO='none'

ifcfg-eth1
STARTMODE='hotplug'
BOOTPROTO='none'
BOOTPROTO=none
uses the ethtool
options (when provided), but does not set the link up on ifup
eth0
. The reason is that the slave interface is controlled by
the bond master.
STARTMODE=hotplug
causes the slave interface to join
the bond automatically as soon as it is available.
The udev
rules in
/etc/udev/rules.d/70-persistent-net.rules
need to be
changed to match the device by bus ID (udev
KERNELS
keyword equal to "SysFS BusID" as
visible in hwinfo --netcard
) instead of by MAC address
to allow the replacement of defective hardware (a network card in the same
slot but with a different MAC), and to avoid confusion as the bond
changes the MAC address of all its slaves.
For example:
SUBSYSTEM=="net", ACTION=="add", DRIVERS=="?*", KERNELS=="0000:00:19.0", ATTR{dev_id}=="0x0", ATTR{type}=="1", KERNEL=="eth*", NAME="eth0"
At boot time, the systemd network.service
does
not wait for the hotplug slaves, but for the bond to become ready, which
requires at least one available slave. When one of the slave interfaces
gets removed (unbind from NIC driver, rmmod
of the NIC
driver or true PCI hotplug remove) from the system, the kernel removes it
from the bond automatically. When a new card is added to the system
(replacement of the hardware in the slot), udev
renames it using the bus-based persistent name rule to the name of the
slave, and calls ifup
for it. The
ifup
call automatically joins it into the bond.
Software-defined networking (SDN) means separating the system that controls where traffic is sent (the control plane) from the underlying system that forwards traffic to the selected destination (the data plane, also called the forwarding plane). That means that the functions previously fulfilled by a single, usually inflexible switch can now be separated between a switch (data plane) and its controller (control plane). In this model, the controller is programmable and can be very flexible and adapt quickly to changing network conditions.
Open vSwitch is software that implements a distributed virtual multilayer switch that is compatible with the OpenFlow protocol. OpenFlow allows a controller application to modify the configuration of a switch. OpenFlow is layered onto the TCP protocol and is implemented in a range of hardware and software. A single controller can thus drive multiple, very different switches.
Software-defined networking with Open vSwitch brings several advantages, especially when used together with virtual machines:
Networking states can be identified easily.
Networks and their live state can be moved from one host to another.
Network dynamics are traceable and external software can be enabled to respond to them.
Apply and manipulate tags in network packets to identify which machine they are coming from or going to and maintain other networking context. Tagging rules can be configured and migrated.
Open vSwitch implements the GRE protocol (Generic Routing Encapsulation). This allows you to, for example, connect private VM networks to each other.
Open vSwitch can be used on its own, but is designed to integrate with networking hardware and can control hardware switches.
Install Open vSwitch and supplemental packages:
root # zypper install openvswitch openvswitch-switch
If you plan to use Open vSwitch together with the KVM hypervisor, additionally install tunctl. If you plan to use Open vSwitch together with the Xen hypervisor, additionally install openvswitch-kmp-xen.
Enable the Open vSwitch service:
root # systemctl enable openvswitch
Either restart the computer or use systemctl
to start the
Open vSwitch service immediately:
root # systemctl start openvswitch
To check whether Open vSwitch was activated correctly, use:
root # systemctl status openvswitch
Open vSwitch consists of several components. Among them are a kernel module and various userspace components. The kernel module is used for accelerating the data path, but is not necessary for a minimal Open vSwitch installation.
The central executables of Open vSwitch are its two daemons.
When you start the openvswitch
service, you are
indirectly starting them.
The main Open vSwitch daemon (ovs-vswitchd
) provides the
implementation of a switch.
The Open vSwitch database daemon (ovsdb-server
) serves the
database that stores the configuration and state of Open vSwitch.
Open vSwitch also comes with several utilities that help you work with it. The following list is not exhaustive, but instead describes important commands only.
ovsdb-tool
Create, upgrade, compact, and query Open vSwitch databases. Do transactions on Open vSwitch databases.
ovs-appctl
Configure a running ovs-vswitchd
or
ovsdb-server
daemon.
ovs-dpctl
, ovs-dpctl-top
Create, modify, visualize and delete data paths.
Using this tool can interfere with ovs-vswitchd
also
performing data path management.
Therefore, it is often used for diagnostics only.
ovs-dpctl-top
creates a top
-like
visualization for data paths.
ovs-ofctl
Manage any switches adhering to the
OpenFlow protocol.
ovs-ofctl
is not limited to interacting with Open vSwitch.
ovs-vsctl
Provides a high-level interface to the configuration database.
It can be used to query and modify the database.
In effect, it shows the status of ovs-vswitchd
and can be used to configure it.
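For example, a few common ovs-vsctl calls (the bridge and port names are illustrative):
ovs-vsctl show                 # display the current switch configuration
ovs-vsctl add-br br1           # create a bridge named br1
ovs-vsctl add-port br1 eth1    # add eth1 as a port of br1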
The following example configuration uses the Wicked network service that is used by default on openSUSE Leap. To learn more about Wicked, see Section 13.6, “Configuring a Network Connection Manually”.
When you have installed and started Open vSwitch, proceed as follows:
To configure a bridge for use by your virtual machine, create a file with content like this:
STARTMODE='auto'   # 1
BOOTPROTO='dhcp'   # 2
OVS_BRIDGE='yes'   # 3
OVS_BRIDGE_PORT_DEVICE_1='eth0'   # 4

1. Set up the bridge automatically when the network service is started.
2. The protocol to use for configuring the IP address.
3. Mark the configuration as an Open vSwitch bridge.
4. Choose which device/devices should be added to the bridge. To add more devices, append additional lines for each of them to the file: OVS_BRIDGE_PORT_DEVICE_SUFFIX='DEVICE'. The SUFFIX can be any alphanumeric string. However, to avoid overwriting a previous definition, make sure the SUFFIX of each device is unique.
Save the file in the directory /etc/sysconfig/network
under the name ifcfg-br0
.
Instead of br0, you can use any name you want.
However, the file name needs to begin with ifcfg-
.
To learn about further options, refer to the man pages of
ifcfg
(man 5 ifcfg
) and
ifcfg-ovs-bridge
(man 5 ifcfg-ovs-bridge
).
Now start the bridge:
root # wicked ifup br0
When Wicked is done, it should output the name of the bridge and next to
it the state up
.
After having created the bridge as described before in Section 13.9.4, “Creating a Bridge with Open vSwitch”, you can use Open vSwitch to manage the network access of virtual machines created with KVM/QEMU.
To be able to best use the capabilities of Wicked, make some further
changes to the bridge configured before.
Open the previously created
/etc/sysconfig/network/ifcfg-br0
and append a line for
another port device:
OVS_BRIDGE_PORT_DEVICE_2='tap0'
Additionally, set BOOTPROTO
to none
.
The file should now look like this:
STARTMODE='auto'
BOOTPROTO='none'
OVS_BRIDGE='yes'
OVS_BRIDGE_PORT_DEVICE_1='eth0'
OVS_BRIDGE_PORT_DEVICE_2='tap0'
The new port device tap0 will be configured in the next step.
Now add a configuration file for the tap0 device:
STARTMODE='auto'
BOOTPROTO='none'
TUNNEL='tap'
Save the file in the directory /etc/sysconfig/network
under the name ifcfg-tap0
.
To be able to use this tap device from a virtual machine started as a
user who is not root
, append:
TUNNEL_SET_OWNER=USER_NAME
To allow access for an entire group, append:
TUNNEL_SET_GROUP=GROUP_NAME
Finally, open the configuration for the device defined as the first
OVS_BRIDGE_PORT_DEVICE
.
If you did not change the name, that should be eth0
.
Therefore, open /etc/sysconfig/network/ifcfg-eth0
and
make sure that the following options are set:
STARTMODE='auto'
BOOTPROTO='none'
If the file does not exist yet, create it.
Restart the bridge interface using Wicked:
root # wicked ifreload br0
This will also trigger a reload of the newly defined bridge port devices.
To start a virtual machine, use, for example:
root # qemu-kvm \
  -drive file=/PATH/TO/DISK-IMAGE \
  -m 512 -net nic,vlan=0,macaddr=00:11:22:EE:EE:EE \
  -net tap,ifname=tap0,script=no,downscript=no
For further information on the usage of KVM/QEMU, see Book “Virtualization Guide”.
libvirt
After having created the bridge as described before in
Section 13.9.4, “Creating a Bridge with Open vSwitch”, you can add the bridge to an existing
virtual machine managed with libvirt
.
Since libvirt
has some support for Open vSwitch bridges already, you can use the
bridge created in Section 13.9.4, “Creating a Bridge with Open vSwitch” without further changes
to the networking configuration.
Open the domain XML file for the intended virtual machine:
root # virsh edit VM_NAME
Replace VM_NAME with the name of the desired virtual machine. This will open your default text editor.
Find the networking section of the document by looking for a section
starting with <interface type="...">
and ending in
</interface>
.
Replace the existing section with a networking section that looks somewhat like this:
<interface type='bridge'>
  <source bridge='br0'/>
  <virtualport type='openvswitch'/>
</interface>
virsh iface-*
and Virtual Machine Manager with Open vSwitch
At the moment, the Open vSwitch compatibility of libvirt
is not exposed through
the virsh iface-*
tools and Virtual Machine Manager.
If you use any of these tools, your configuration can break.
You can now start or restart the virtual machine as usual.
For further information on the usage of libvirt
, see
Book “Virtualization Guide”.
The documentation section of the Open vSwitch project Web site
Whitepaper by the Open Networking Foundation about software-defined networking and the OpenFlow protocol
UEFI (Unified Extensible Firmware Interface) is the interface between the firmware that comes with the system hardware, all the hardware components of the system, and the operating system.
UEFI is becoming more and more available on PC systems and thus is replacing the traditional PC-BIOS. UEFI, for example, properly supports 64-bit systems and offers secure booting (“Secure Boot”, firmware version 2.3.1c or better required), which is one of its most important features. Lastly, with UEFI a standard firmware will become available on all x86 platforms.
UEFI additionally offers the following advantages:
Booting from large disks (over 2 TiB) with a GUID Partition Table (GPT).
CPU-independent architecture and drivers.
Flexible pre-OS environment with network capabilities.
CSM (Compatibility Support Module) to support booting legacy operating systems via a PC-BIOS-like emulation.
For more information, see http://en.wikipedia.org/wiki/Unified_Extensible_Firmware_Interface. The following sections are not meant as a general UEFI overview; these are only hints about how some features are implemented in SUSE Linux Enterprise.
In the world of UEFI, securing the bootstrapping process means establishing a chain of trust. The “platform” is the root of this chain of trust; in the context of SUSE Linux Enterprise, the mainboard and the on-board firmware could be considered the “platform”. Or, put slightly differently, it is the hardware vendor, and the chain of trust flows from that hardware vendor to the component manufacturers, the OS vendors, etc.
The trust is expressed via public key cryptography. The hardware vendor puts a so-called Platform Key (PK) into the firmware, representing the root of trust. The trust relationship with operating system vendors and others is documented by signing their keys with the Platform Key.
Finally, security is established by requiring that no code will be executed by the firmware unless it has been signed by one of these “trusted” keys—be it an OS boot loader, some driver located in the flash memory of some PCI Express card or on disk, or be it an update of the firmware itself.
Essentially, if you want to use Secure Boot, you need to have your OS loader signed with a key trusted by the firmware, and you need the OS loader to verify that the kernel it loads can be trusted.
Key Exchange Keys (KEK) can be added to the UEFI key database. This way, you can use other certificates, as long as they are signed with the private part of the PK.
Microsoft’s Key Exchange Key (KEK) is installed by default.
The Secure Boot feature is enabled by default on UEFI/x86_64 installations. You can find the
option in the tab of the dialog. It supports booting when the secure boot is activated in the firmware, while making it possible to boot when it is deactivated.The Secure Boot feature requires that a GUID Partitioning Table (GPT) replaces the old partitioning with a Master Boot Record (MBR). If YaST detects EFI mode during the installation, it will try to create a GPT partition. UEFI expects to find the EFI programs on a FAT-formatted EFI System Partition (ESP).
Supporting UEFI Secure Boot essentially requires having a boot loader with a digital signature that the firmware recognizes as a trusted key. To be useful for SUSE Linux Enterprise customers, that key is trusted by the firmware a priori, without requiring any manual intervention.
There are two ways of getting there. One is to work with hardware vendors to have them endorse a SUSE key, which SUSE then signs the boot loader with. The other way is to go through Microsoft’s Windows Logo Certification program to have the boot loader certified and have Microsoft recognize the SUSE signing key (that is, have it signed with their KEK). By now, SUSE has its loader signed by the UEFI Signing Service (that is, Microsoft in this case).
At the implementation layer, SUSE uses the shim
loader which is installed by default. It is a smart solution that avoids legal issues, and
simplifies the certification and signing step considerably. The
shim
loader’s job is to load a boot loader
such as ELILO or GRUB 2 and verify it; this boot loader in turn
will load kernels signed by a SUSE key only. SUSE provides this
functionality since SLE11 SP3 on fresh installations with UEFI Secure
Boot enabled.
There are two types of trusted users:
First, those who hold the keys. The Platform Key (PK) allows almost everything. The Key Exchange Key (KEK) allows all a PK can except changing the PK.
Second, anyone with physical access to the machine. A user with physical access can reboot the machine, and configure UEFI.
UEFI offers two types of variables to fulfill the needs of those users:
The first is the so-called “Authenticated Variables”, which can be updated from both within the boot process (the so-called Boot Services Environment) and the running OS, but only when the new value of the variable is signed with the same key that the old value of the variable was signed with. And they can only be appended to or changed to a value with a higher serial number.
The second is the so-called “Boot Services Only
Variables”. These variables are accessible to any code that
runs during the boot process. After the boot process ends and before
the OS starts, the boot loader must call the
ExitBootServices
call. After that, these variables
are no longer accessible, and the OS cannot touch them.
The various UEFI key lists are of the first type, as this allows online updating, adding, and blacklisting of keys, drivers, and firmware fingerprints. It is the second type of variable, the “Boot Services Only Variable”, that helps to implement Secure Boot, in a matter that is both secure and open source friendly, and thus compatible with GPLv3.
SUSE starts with shim
—a small and
simple EFI boot loader—which was originally developed by
Fedora. It is signed by a certificate signed by the SUSE KEK and a
Microsoft-issued certificate, based on which KEKs are available in the
UEFI key database on the system.
This allows shim
to load and execute.
shim
then goes on to verify that the boot
loader it wants to load is trusted.
In a default situation shim
will use an
independent SUSE certificate embedded in its body. In addition,
shim
will allow you to “enroll”
additional keys, overriding the default SUSE key. In the following, we
call them “Machine Owner Keys” or MOKs for short.
Next the boot loader will verify and then boot the kernel, and the kernel will do the same on the modules.
If the user (“machine owner”) wants to replace any
components of the boot process, Machine Owner Keys (MOKs) are to be
used. The mokutil
tool will help with signing
components and managing MOKs.
The enrollment process begins with rebooting the machine and
interrupting the boot process (for example, pressing a key) when
shim
loads. shim
will
then go into enrollment mode, allowing the user to replace the default
SUSE key with keys from a file on the boot partition. If the user
chooses to do so, shim
will then calculate a
hash of that file and put the result in a “Boot Services
Only” variable. This allows shim
to
detect any change of the file made outside of Boot Services and thus
avoid tampering with the list of user-approved MOKs.
All of this happens during boot time—only verified code is executing now. Therefore, only a user present at the console can use the machine owner's set of keys. It cannot be malware or a hacker with remote access to the OS because hackers or malware can only change the file, but not the hash stored in the “Boot Services Only” variable.
The boot loader, after having been loaded and verified by
shim
, will call back to
shim
when it wants to verify the
kernel—to avoid duplication of the verification code.
Shim
will use the same list of MOKs for this
and tell the boot loader whether it can load the kernel.
This way, you can install your own kernel or boot loader. It is only
necessary to install a new set of keys and authorize them by being
physically present during the first reboot. Because MOKs are a list and
not just a single MOK, you can make shim
trust
keys from several different vendors, allowing dual- and multi-boot from
the boot loader.
The following is based on http://en.opensuse.org/openSUSE:UEFI#Booting_a_custom_kernel.
Secure Boot does not prevent you from using a self-compiled kernel. You must sign it with your own certificate and make that certificate known to the firmware or MOK.
Create a custom X.509 key and certificate used for signing:
openssl req -new -x509 -newkey rsa:2048 -keyout key.asc \
  -out cert.pem -nodes -days 666 -subj "/CN=$USER/"
For more information about creating certificates, see http://en.opensuse.org/openSUSE:UEFI_Image_File_Sign_Tools#Create_Your_Own_Certificate.
Package the key and the certificate as a PKCS#12 structure:
openssl pkcs12 -export -inkey key.asc -in cert.pem \
  -name kernel_cert -out cert.p12
Generate an NSS database for use with pesign
:
certutil -d . -N
Import the key and the certificate contained in PKCS#12 into the NSS database:
pk12util -d . -i cert.p12
“Bless” the kernel with the new signature using
pesign
:
pesign -n . -c kernel_cert -i arch/x86/boot/bzImage \
  -o vmlinuz.signed -s
List the signatures on the kernel image:
pesign -n . -S -i vmlinuz.signed
At that point, you can install the kernel in
/boot
as usual. Because the kernel now has a
custom signature the certificate used for signing needs to be imported
into the UEFI firmware or MOK.
Convert the certificate to the DER format for import into the firmware or MOK:
openssl x509 -in cert.pem -outform der -out cert.der
Copy the certificate to the ESP for easier access:
sudo cp cert.der /boot/efi/
Use mokutil
to launch the MOK list automatically.
Import the certificate to MOK:
mokutil --root-pw --import cert.der
The --root-pw
option enables usage of the
root
user directly.
Check the list of certificates that are prepared to be enrolled:
mokutil --list-new
Reboot the system; shim
should launch
MokManager. You need to enter the root
password to
confirm the import of the certificate to the MOK list.
Check if the newly imported key was enrolled:
mokutil --list-enrolled
Alternatively, this is the procedure if you want to launch MOK manually:
Reboot
In the GRUB 2 menu press the 'c
' key.
Type:
chainloader $efibootdir/MokManager.efi
boot
Select Enroll key from disk.
Navigate to the cert.der
file and press
Enter.
Follow the instructions to enroll the key. Normally this should be
pressing '0
' and then 'y
' to
confirm.
Alternatively, the firmware menu may provide ways to add a new key to the Signature Database.
There is no support for adding non-inbox drivers (that is, drivers that do not come with SLE) during installation with Secure Boot enabled. The signing key used for SolidDriver/PLDP is not trusted by default.
It is possible to install third party drivers during installation with Secure Boot enabled in two different ways. In both cases:
Add the needed keys to the firmware database via firmware/system management tools before the installation. This option depends on the specific hardware you are using. Consult your hardware vendor for more information.
Use a bootable driver ISO from https://drivers.suse.com/ or your hardware vendor to enroll the needed keys in the MOK list at first boot.
To use the bootable driver ISO to enroll the driver keys to the MOK list, follow these steps:
Burn the ISO image above to an empty CD/DVD medium.
Start the installation using the new CD/DVD medium, having the standard SUSE Linux Enterprise media at hand or a URL to a network installation server.
If doing a network installation, enter the URL of the network
installation source on the boot command line using the
install=
option.
If doing installation from optical media, the installer will first boot from the driver kit and then ask to insert the first disk of the SUSE Linux Enterprise product.
An initrd containing updated drivers will be used for installation.
For more information, see https://drivers.suse.com/doc/Usage/Secure_Boot_Certificate.html.
When booting in Secure Boot mode, the following features apply:
Installation to UEFI default boot loader location, a mechanism to keep or restore the EFI boot entry.
Reboot via UEFI.
Xen hypervisor will boot with UEFI when there is no legacy BIOS to fall back to.
UEFI IPv6 PXE boot support.
UEFI get videomode support, the kernel can retrieve video mode from UEFI to configure KMS mode with the same parameters.
UEFI booting from USB devices is supported.
When booting in Secure Boot mode, the following limitations apply:
To ensure that Secure Boot cannot be easily circumvented, some kernel features are disabled when running under Secure Boot.
Boot loader, kernel, and kernel modules must be signed.
Kexec and Kdump are disabled.
Hibernation (suspend on disk) is disabled.
Access to /dev/kmem
and
/dev/mem
is not possible, not even as root user.
Access to the I/O port is not possible, not even as root user. All X11 graphical drivers must use a kernel driver.
PCI BAR access through sysfs is not possible.
custom_method
in ACPI is not available.
debugfs for asus-wmi module is not available.
the acpi_rsdp
parameter does not have any effect on
the kernel.
http://www.uefi.org —UEFI home page where you can find the current UEFI specifications.
Blog posts by Olaf Kirch and Vojtěch Pavlík (the chapter above is heavily based on these posts):
http://en.opensuse.org/openSUSE:UEFI —UEFI with openSUSE.
This chapter starts with information about various software packages,
the virtual consoles and the keyboard layout. We talk about software
components like bash
,
cron
and
logrotate
, because they were
changed or enhanced during the last release cycles. Even if they are
small or considered of minor importance, users may want to change their default behavior, because these components are often closely coupled
with the system. The chapter concludes with a section about language and
country-specific settings (I18N and L10N).
The programs bash
,
cron
,
logrotate
,
locate
,
ulimit
and
free
are very important for
system administrators and many users. Man pages and info pages are two
useful sources of information about commands, but both are not always
available. GNU Emacs is a popular and very configurable text editor.
bash
Package and /etc/profile
Bash is the default system shell. When used as a login shell, it reads several initialization files. Bash processes them in the order they appear in this list:
/etc/profile
~/.profile
/etc/bash.bashrc
~/.bashrc
Make custom settings in ~/.profile
or
~/.bashrc
. To ensure the correct processing of these
files, it is necessary to copy the basic settings from
/etc/skel/.profile
or
/etc/skel/.bashrc
into the home directory of the
user. It is recommended to copy the settings from
/etc/skel
after an update. Execute the following
shell commands to prevent the loss of personal adjustments:
mv ~/.bashrc ~/.bashrc.old
cp /etc/skel/.bashrc ~/.bashrc
mv ~/.profile ~/.profile.old
cp /etc/skel/.profile ~/.profile
Then copy personal adjustments back from the *.old
files.
If you want to run commands regularly and automatically in the background at predefined times, cron is the tool to use. cron is driven by specially formatted time tables. Some come with the system and users can write their own tables if needed.
The cron tables are located in /var/spool/cron/tabs
.
/etc/crontab
serves as a systemwide cron table.
Enter the user name to run the command directly after the time table and
before the command. In Example 15.1, “Entry in /etc/crontab”,
root
is entered.
Package-specific tables, located in /etc/cron.d
,
have the same format. See the cron
man page
(man cron
).
1-59/5 * * * * root test -x /usr/sbin/atrun && /usr/sbin/atrun
You cannot edit /etc/crontab
by calling the command
crontab -e
. This file must be loaded directly into an
editor, then modified and saved.
A number of packages install shell scripts to the directories
/etc/cron.hourly
,
/etc/cron.daily
,
/etc/cron.weekly
and
/etc/cron.monthly
, whose execution is controlled by
/usr/lib/cron/run-crons
.
/usr/lib/cron/run-crons
is run every 15 minutes from
the main table (/etc/crontab
). This guarantees that
processes that may have been neglected can be run at the proper time.
To run the hourly
, daily
or
other periodic maintenance scripts at custom times, remove the time stamp
files regularly using /etc/crontab
entries (see
Example 15.2, “/etc/crontab: Remove Time Stamp Files”, which removes the
hourly
one before every full hour, the
daily
one once a day at 2:14 a.m., etc.).
59 *  * * *   root  rm -f /var/spool/cron/lastrun/cron.hourly
14 2  * * *   root  rm -f /var/spool/cron/lastrun/cron.daily
29 2  * * 6   root  rm -f /var/spool/cron/lastrun/cron.weekly
44 2  1 * *   root  rm -f /var/spool/cron/lastrun/cron.monthly
Or you can set DAILY_TIME
in
/etc/sysconfig/cron
to the time at which
cron.daily
should start. The setting of
MAX_NOT_RUN
ensures that the daily tasks get triggered
to run, even if the user did not turn on the computer at the specified
DAILY_TIME
for a longer time. The maximum
value of MAX_NOT_RUN
is 14 days.
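For example, a sketch of the corresponding /etc/sysconfig/cron entries (the values are illustrative):
DAILY_TIME="04:00"
MAX_NOT_RUN="7"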
The daily system maintenance jobs are distributed to various scripts for
reasons of clarity. They are contained in the package
aaa_base
.
/etc/cron.daily
contains, for example, the
components suse.de-backup-rpmdb
,
suse.de-clean-tmp
or
suse.de-cron-local
.
To avoid the mail-flood caused by cron status messages, the default value
of SEND_MAIL_ON_NO_ERROR
in
/etc/sysconfig/cron
is set to
"no
" for new installations. Even with this setting to
"no
", cron data output will still be sent to the
MAILTO
address, as documented in the cron man page.
In the update case it is recommended to set these values according to your needs.
There are several system services (daemons) that,
along with the kernel itself, regularly record the system status and
specific events onto log files. This way, the administrator can regularly
check the status of the system at a certain point in time, recognize
errors or faulty functions and troubleshoot them with pinpoint precision.
These log files are normally stored in /var/log
as
specified by FHS and grow on a daily basis. The
logrotate
package helps control the growth of
these files.
Configure logrotate with the file /etc/logrotate.conf
. In particular, the
include
specification primarily configures the
additional files to read. Programs that produce log files install
individual configuration files in /etc/logrotate.d
.
For example, such files ship with the packages
apache2
(/etc/logrotate.d/apache2
) and
syslog-service
(/etc/logrotate.d/syslog
).
# see "man logrotate" for details # rotate log files weekly weekly # keep 4 weeks worth of backlogs rotate 4 # create new (empty) log files after rotating old ones create # uncomment this if you want your log files compressed #compress # RPM packages drop log rotation information into this directory include /etc/logrotate.d # no packages own lastlog or wtmp - we'll rotate them here #/var/log/wtmp { # monthly # create 0664 root utmp # rotate 1 #} # system-specific logs may be also be configured here.
logrotate is controlled through cron and is called daily by
/etc/cron.daily/logrotate
.
locate
, a command for quickly finding files, is not
included in the standard scope of installed software. If desired, install
the package mlocate
, the successor of the
package findutils-locate
. The updatedb process
is started automatically every night or about 15 minutes after
booting the system.
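For example, a short hedged session (the file name searched for is illustrative):
root # zypper install mlocate   # install the locate implementation
root # updatedb                 # build the file name database manually
locate ifcfg-eth0               # search the database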
With the ulimit
(user limits)
command, it is possible to set limits for the use of system resources and
to have these displayed. ulimit
is especially useful
for limiting available memory for applications. With this, an application
can be prevented from co-opting too much of the system resources and
slowing or even hanging up the operating system.
ulimit
can be used with various options. To limit
memory usage, use the options listed in
Table 15.1, “ulimit
: Setting Resources for the User”.
ulimit
: Setting Resources for the User
-m
The maximum resident set size
-v
The maximum amount of virtual memory available to the shell
-s
The maximum size of the stack
-c
The maximum size of core files created
-a
All current limits are reported
Systemwide default entries are set in /etc/profile
.
Editing this file directly is not recommended, because changes will be
overwritten during system upgrades. To customize systemwide profile
settings, use /etc/profile.local
. Per-user settings
should be made in
~USER/.bashrc
.
# Limits maximum resident set size (physical memory):
ulimit -m 98304

# Limits of virtual memory:
ulimit -v 98304
Memory allocations must be specified in KB. For more detailed
information, see man bash
.
ulimit
Support
Not all shells support ulimit
directives. PAM (for
instance, pam_limits
) offers comprehensive adjustment
possibilities as an alternative to ulimit
.
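As an illustration, pam_limits reads its limits from /etc/security/limits.conf; an entry such as the following sketch would cap the address space of one user (the user name and value are illustrative, the value is in KB):
# <domain>  <type>  <item>  <value>
tux         hard    as      98304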
The free
command displays the total amount of free and
used physical memory and swap space in the system, plus the buffers
and cache consumed by the kernel. The concept of available
RAM dates back to before the days of unified memory
management. The slogan free memory is bad memory
applies well to Linux. As a result, Linux has always made the effort to
balance out caches without actually allowing free or unused memory.
The kernel does not have direct knowledge of any applications or user
data. Instead, it manages applications and user data in a page
cache. If memory runs short, parts of it are written to the
swap partition or to files, from which they can initially be read with
the help of the mmap system call (see man
mmap
).
The kernel also contains other caches, such as the slab
cache, where the caches used for network access are stored.
This may explain the differences between the counters in
/proc/meminfo
. Most, but not all, of them can be
accessed via /proc/slabinfo
.
However, if your goal is to find out how much RAM is currently being
used, find this information in /proc/meminfo
.
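For example, to inspect the current memory situation:
free -h                                          # human-readable summary
grep -E 'MemTotal|MemAvailable' /proc/meminfo    # read the counters directly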
For some GNU applications (such as tar), the man pages are no longer
maintained. For these commands, use the --help option to get a quick overview, or see the info pages, which provide more in-depth instructions. Info
is GNU's hypertext system. Read an introduction to this system by
entering info
info
. Info pages can be
viewed with Emacs by entering emacs
-f
info
or directly in a console with info
. You
can also use tkinfo, xinfo or the help system to view info pages.
man
Command
To read a man page enter man
man_page. If a man page with the same name
exists in different sections, they will all be listed with the
corresponding section numbers. Select the one to display. If you do not
enter a section number within a few seconds, the first man page will be
displayed.
If you want to change this to the default system behavior, set
MAN_POSIXLY_CORRECT=1
in a shell initialization file
such as ~/.bashrc
.
GNU Emacs is a complex work environment. The following sections cover the configuration files processed when GNU Emacs is started. More information is available at http://www.gnu.org/software/emacs/.
On start-up, Emacs reads several files containing the settings of the
user, system administrator and distributor for customization or
preconfiguration. The initialization file ~/.emacs
is
installed to the home directories of the individual users from
/etc/skel
. .emacs
, in turn,
reads the file /etc/skel/.gnu-emacs
. To customize the
program, copy .gnu-emacs
to the home directory (with
cp /etc/skel/.gnu-emacs ~/.gnu-emacs
) and make the
desired settings there.
.gnu-emacs
defines the file
~/.gnu-emacs-custom
as
custom-file
. If users make settings with the
customize
options in Emacs, the settings are saved to
~/.gnu-emacs-custom
.
With openSUSE Leap, the emacs
package installs the file site-start.el
in the
directory /usr/share/emacs/site-lisp
. The file
site-start.el
is loaded before the initialization
file ~/.emacs
. Among other things,
site-start.el
ensures that special configuration
files distributed with Emacs add-on packages, such as
psgml
, are loaded automatically.
Configuration files of this type are located in
/usr/share/emacs/site-lisp
, too, and always begin
with suse-start-
. The local system administrator can
specify systemwide settings in default.el
.
More information about these files is available in the Emacs info file
under Init File:
info:/emacs/InitFile
. Information about how to disable
the loading of these files (if necessary) is also provided at this
location.
The components of Emacs are divided into several packages:
The base package emacs
.
emacs-x11
(usually installed):
the program with X11 support.
emacs-nox
: the program
without X11 support.
emacs-info
: online
documentation in info format.
emacs-el
: the uncompiled
library files in Emacs Lisp. These are not required at runtime.
Numerous add-on packages can be installed if needed:
emacs-auctex
(LaTeX),
psgml
(SGML and XML),
gnuserv
(client and server
operation) and others.
Linux is a multiuser and multitasking system. The advantages of these features can be appreciated even on a stand-alone PC system. In text mode, there are six virtual consoles available. Switch between them using Alt–F1 through Alt–F6. The seventh console is reserved for X and the tenth console shows kernel messages.
To switch to a console from X without shutting it down, use Ctrl–Alt–F1 to Ctrl–Alt–F6. To return to X, press Alt–F7.
To standardize the keyboard mapping of programs, changes were made to the following files:
/etc/inputrc
/etc/X11/Xmodmap
/etc/skel/.emacs
/etc/skel/.gnu-emacs
/etc/skel/.vimrc
/etc/csh.cshrc
/etc/termcap
/usr/share/terminfo/x/xterm
/usr/share/X11/app-defaults/XTerm
/usr/share/emacs/VERSION/site-lisp/term/*.el
These changes only affect applications that use
terminfo
entries or whose configuration files are
changed directly (vi
, emacs
, etc.).
Applications not shipped with the system should be adapted to these
defaults.
Under X, the compose key (multikey) can be enabled as explained in
/etc/X11/Xmodmap
.
Further settings are possible using the X Keyboard Extension (XKB). This extension is also used by the desktop environment GNOME (gswitchit).
Information about XKB is available in the documents listed in
/usr/share/doc/packages/xkeyboard-config
(part of
the xkeyboard-config
package).
The system is, to a very large extent, internationalized and can be modified for local needs. Internationalization (I18N) allows specific localizations (L10N). The abbreviations I18N and L10N are derived from the first and last letters of the words and, in between, the number of letters omitted.
Settings are made with LC_
variables defined in
the file /etc/sysconfig/language
. This refers not
only to native language support, but also to the
categories Messages (Language), Character
Set, Sort Order, Time and
Date, Numbers and
Money. Each of these categories can be defined
directly with its own variable or indirectly with a master variable in the
file language
(see the locale
man
page).
RC_LC_MESSAGES
,
RC_LC_CTYPE
,
RC_LC_COLLATE
,
RC_LC_TIME
,
RC_LC_NUMERIC
,
RC_LC_MONETARY
These variables are passed to the shell without the
RC_
prefix and represent the listed
categories. The shell profiles concerned are listed below. The current
setting can be shown with the command locale
.
RC_LC_ALL
This variable, if set, overwrites the values of the variables already mentioned.
RC_LANG
If none of the previous variables are set, this is the fallback. By
default, only RC_LANG
is set. This makes it
easier for users to enter their own values.
ROOT_USES_LANG
A yes
or no
variable. If set to
no
, root
always works in the POSIX environment.
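A quick way to check which values the shell actually received is the locale command (shown here as a sketch):

locale           # print all locale categories and their current values
locale -a        # list all locales available on the system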
The variables can be set with the YaST sysconfig editor. The value of such a variable contains the language code, country code, encoding and modifier. The individual components are connected by special characters:
LANG=<language>[_<COUNTRY>][.<Encoding>][@<Modifier>]
You should always set the language and country codes together. Language settings follow the standard ISO 639 available at http://www.evertype.com/standards/iso639/iso639-en.html and http://www.loc.gov/standards/iso639-2/. Country codes are listed in ISO 3166, see http://en.wikipedia.org/wiki/ISO_3166.
It only makes sense to set values for which usable description files can
be found in /usr/lib/locale
. Additional description
files can be created from the files in
/usr/share/i18n
using the command
localedef
. The description files are part of the
glibc-i18ndata
package. A description file for
en_US.UTF-8
(for English and United States) can be
created with:
localedef -i en_US -f UTF-8 en_US.UTF-8
LANG=en_US.UTF-8
This is the default setting if American English is selected during installation. If you selected another language, that language is enabled but still with UTF-8 as the character encoding.
LANG=en_US.ISO-8859-1
This sets the language to English, country to United States and the
character set to ISO-8859-1
. This character set
does not support the Euro sign, but it can be useful sometimes for
programs that have not been updated to support
UTF-8
. The string defining the charset
(ISO-8859-1
in this case) is then evaluated by
programs like Emacs.
LANG=en_IE@euro
The above example explicitly includes the Euro sign in a language setting. This setting is obsolete now, as UTF-8 also covers the Euro symbol. It is only useful if an application supports ISO-8859-15 and not UTF-8.
Changes to /etc/sysconfig/language
are activated by
the following process chain:
For the Bash: /etc/profile
reads
/etc/profile.d/lang.sh
which, in turn, analyzes
/etc/sysconfig/language
.
For tcsh: At login, /etc/csh.login
reads
/etc/profile.d/lang.csh
which, in turn, analyzes
/etc/sysconfig/language
.
This ensures that any changes to
/etc/sysconfig/language
are available at the next
login to the respective shell, without having to manually activate
them.
Users can override the system defaults by editing their
~/.bashrc
accordingly. For instance, if you do not
want to use the system-wide en_US
for program
messages, include LC_MESSAGES=es_ES
so that
messages are displayed in Spanish instead.
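A minimal sketch of such an override in ~/.bashrc, assuming the es_ES locale has been generated on the system:

# display program messages in Spanish, leave all other categories alone
export LC_MESSAGES=es_ES.UTF-8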
~/.i18n
#
If you are not satisfied with locale system defaults, change the settings
in ~/.i18n
according to the Bash scripting syntax.
Entries in ~/.i18n
override system defaults from
/etc/sysconfig/language
. Use the same variable names
but without the RC_
name space prefixes. For example,
use LANG
instead of RC_LANG
:
LANG=cs_CZ.UTF-8 LC_COLLATE=C
Files in the category Messages are, as a rule, only
stored in the corresponding language directory (like
en
) to have a fallback. If you set
LANG
to en_US
and the message
file in /usr/share/locale/en_US/LC_MESSAGES
does not
exist, it falls back to
/usr/share/locale/en/LC_MESSAGES
.
A fallback chain can also be defined, for example, for Breton to French or for Galician to Spanish to Portuguese:
LANGUAGE="br_FR:fr_FR"
LANGUAGE="gl_ES:es_ES:pt_PT"
If desired, use the Norwegian variants Nynorsk and Bokmål instead (with
additional fallback to no
):
LANG="nn_NO"
LANGUAGE="nn_NO:nb_NO:no"
or
LANG="nb_NO"
LANGUAGE="nb_NO:nn_NO:no"
Note that in Norwegian, LC_TIME
is also treated
differently.
One problem that can arise is that a separator used to delimit groups of
digits is not recognized properly. This occurs if
LANG
is set to only a two-letter language code
like de
, but the definition file glibc uses is located
in /usr/lib/locale/de_DE/LC_NUMERIC
. Thus
LC_NUMERIC
must be set to
de_DE
to make the separator definition visible to the
system.
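The effect can be illustrated with the digit-grouping flag of GNU printf; this sketch assumes the de_DE.UTF-8 locale is installed:

env LC_NUMERIC=de_DE.UTF-8 printf "%'d\n" 1234567   # 1.234.567
env LC_NUMERIC=de printf "%'d\n" 1234567            # 1234567, no definition file found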
The GNU C Library Reference Manual, Chapter
“Locales and Internationalization”. It is included in
glibc-info
. The package is available from the
SUSE Linux Enterprise SDK.
The SDK is a module for SUSE Linux Enterprise and is available via an online channel from
the SUSE Customer Center. Alternatively, go to http://download.suse.com/, search for SUSE Linux Enterprise
Software Development Kit
and download it from there.
Refer to Book “Start-Up”, Chapter 10 “Installing Add-On Products” for details.
Markus Kuhn, UTF-8 and Unicode FAQ for Unix/Linux, currently at http://www.cl.cam.ac.uk/~mgk25/unicode.html.
Unicode-HOWTO by Bruno Haible, available at http://tldp.org/HOWTO/Unicode-HOWTO-1.html.
udev #
The kernel can add or remove almost any device in a running system.
Changes in the device state (whether a device is plugged in or removed)
need to be propagated to user space. Devices need to be configured as soon
as they are plugged in and recognized. Users of a certain device need to
be informed about any changes in this device's recognized state.
udev
provides the needed
infrastructure to dynamically maintain the device node files and symbolic
links in the /dev
directory.
udev
rules provide a way to plug
external tools into the kernel device event processing. This enables you
to customize udev
device handling
by, for example, adding certain scripts to execute as part of kernel
device handling, or request and import additional data to evaluate during
device handling.
/dev
Directory #
The device nodes in the /dev
directory provide
access to the corresponding kernel devices. With
udev
, the
/dev
directory reflects the current state of the
kernel. Every kernel device has one corresponding device file. If a
device is disconnected from the system, the device node is removed.
The content of the /dev
directory is kept on a
temporary file system and all files are rendered at every system
start-up. Manually created or modified files do not, by design, survive a
reboot. Static files and directories that should always be in the
/dev
directory regardless of the state of the
corresponding kernel device can be created with systemd-tmpfiles. The
configuration files are found in /usr/lib/tmpfiles.d/
and
/etc/tmpfiles.d/
; for more information, see the
systemd-tmpfiles(8)
man page.
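A minimal sketch of such a configuration file (the name my-dev.conf is hypothetical; see the tmpfiles.d(5) man page for the full syntax):

# /etc/tmpfiles.d/my-dev.conf
# d creates a directory, L a symbolic link, re-created at every boot
d /dev/my-devices 0755 root root -
L /dev/my-default-disk - - - - /dev/sda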
uevents
and udev
#
The required device information is exported by the
sysfs
file system. For every
device the kernel has detected and initialized, a directory with the
device name is created. It contains attribute files with device-specific
properties.
Every time a device is added or removed, the kernel sends a uevent to
notify udev
of the change. The
udev
daemon reads and parses all
provided rules from the /etc/udev/rules.d/*.rules
files once at start-up and keeps them in memory. If rules files are
changed, added or removed, the daemon can reload the in-memory
representation of all rules with the command udevadm control
--reload-rules
. For more details on
udev
rules and their syntax,
refer to Section 16.6, “Influencing Kernel Device Event Handling with udev
Rules”.
Every received event is matched against the set of provided rules. The
rules can add or change event environment keys, request a specific name
for the device node to create, add symbolic links pointing to the node or
add programs to run after the device node is created. The driver core
uevents
are received from a
kernel netlink socket.
The kernel bus drivers probe for devices. For every detected device, the
kernel creates an internal device structure while the driver core sends a
uevent to the udev
daemon. Bus
devices identify themselves by a specially-formatted ID, which tells what
kind of device it is. Usually these IDs consist of vendor and product ID
and other subsystem-specific values. Every bus has its own scheme for
these IDs, called MODALIAS
. The kernel takes the device
information, composes a MODALIAS
ID string from it and
sends that string along with the event. For a USB mouse, it looks like
this:
MODALIAS=usb:v046DpC03Ed2000dc00dsc00dp00ic03isc01ip02
Every device driver carries a list of known aliases for devices it can
handle. The list is contained in the kernel module file itself. The
program depmod reads the ID lists and creates the file
modules.alias
in the kernel's
/lib/modules
directory for all currently available
modules. With this infrastructure, module loading is as easy as calling
modprobe
for every event that carries a
MODALIAS
key. If modprobe $MODALIAS
is called, it matches the device alias composed for the device with the
aliases provided by the modules. If a matching entry is found, that
module is loaded. All this is automatically triggered by
udev
.
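Conceptually, what happens for the USB mouse event above boils down to the following sketch:

# udev effectively runs modprobe with the MODALIAS value of the event
MODALIAS=usb:v046DpC03Ed2000dc00dsc00dp00ic03isc01ip02
/sbin/modprobe "$MODALIAS"

modprobe matches this string against the entries in modules.alias and loads the module whose alias pattern fits.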
All device events happening during the boot process before the
udev
daemon is running are lost,
because the infrastructure to handle these events resides on the root
file system and is not available at that time. To cover that loss, the
kernel provides a uevent
file located in the device
directory of every device in the
sysfs
file system. By writing
add
to that file, the kernel resends the same event as
the one lost during boot. A simple loop over all
uevent
files in /sys
triggers
all events again to create the device nodes and perform device setup.
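Such a loop could look like the following sketch (run as root); on current systems, udevadm trigger performs the same task in a supported way:

# replay lost boot-time events by rewriting every uevent file
for uevent in $(find /sys -name uevent); do
    echo add > "$uevent" 2>/dev/null
done

# equivalent, using the udev tooling
udevadm trigger --action=add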
As an example, a USB mouse present during boot may not be initialized by
the early boot logic, because the driver is not available at that time.
The event for the device discovery was lost and failed to find a kernel
module for the device. Instead of manually searching for possibly
connected devices, udev
requests
all device events from the kernel after the root file system is
available, so the event for the USB mouse device runs again. Now it finds
the kernel module on the mounted root file system and the USB mouse can
be initialized.
From user space, there is no visible difference between a device coldplug sequence and a device discovery during runtime. In both cases, the same rules are used to match and the same configured programs are run.
udev
Daemon #
The program udevadm monitor
can be used to visualize
the driver core events and the timing of the
udev
event processes.
UEVENT[1185238505.276660] add /devices/pci0000:00/0000:00:1d.2/usb3/3-1 (usb)
UDEV  [1185238505.279198] add /devices/pci0000:00/0000:00:1d.2/usb3/3-1 (usb)
UEVENT[1185238505.279527] add /devices/pci0000:00/0000:00:1d.2/usb3/3-1/3-1:1.0 (usb)
UDEV  [1185238505.285573] add /devices/pci0000:00/0000:00:1d.2/usb3/3-1/3-1:1.0 (usb)
UEVENT[1185238505.298878] add /devices/pci0000:00/0000:00:1d.2/usb3/3-1/3-1:1.0/input/input10 (input)
UDEV  [1185238505.305026] add /devices/pci0000:00/0000:00:1d.2/usb3/3-1/3-1:1.0/input/input10 (input)
UEVENT[1185238505.305442] add /devices/pci0000:00/0000:00:1d.2/usb3/3-1/3-1:1.0/input/input10/mouse2 (input)
UEVENT[1185238505.306440] add /devices/pci0000:00/0000:00:1d.2/usb3/3-1/3-1:1.0/input/input10/event4 (input)
UDEV  [1185238505.325384] add /devices/pci0000:00/0000:00:1d.2/usb3/3-1/3-1:1.0/input/input10/event4 (input)
UDEV  [1185238505.342257] add /devices/pci0000:00/0000:00:1d.2/usb3/3-1/3-1:1.0/input/input10/mouse2 (input)
The UEVENT
lines show the events the kernel has sent
over netlink. The UDEV
lines show the finished
udev
event handlers. The timing
is printed in microseconds. The time between UEVENT
and UDEV
is the time
udev
took to process this event
or the udev
daemon has delayed
its execution to synchronize this event with related and already running
events. For example, events for hard disk partitions always wait for the
main disk device event to finish, because the partition events may rely
on the data that the main disk event has queried from the hardware.
udevadm monitor --env
shows the complete event
environment:
ACTION=add
DEVPATH=/devices/pci0000:00/0000:00:1d.2/usb3/3-1/3-1:1.0/input/input10
SUBSYSTEM=input
SEQNUM=1181
NAME="Logitech USB-PS/2 Optical Mouse"
PHYS="usb-0000:00:1d.2-1/input0"
UNIQ=""
EV=7
KEY=70000 0 0 0 0
REL=103
MODALIAS=input:b0003v046DpC03Ee0110-e0,1,2,k110,111,112,r0,1,8,amlsfw
udev
also sends messages to
syslog. The default syslog priority that controls which messages are sent
to syslog is specified in the
udev
configuration file
/etc/udev/udev.conf
. The log priority of the running
daemon can be changed with udevadm control
--log-priority=LEVEL, where LEVEL is a priority name such as
err, info or debug, or the corresponding number.
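For example, to raise the verbosity temporarily while debugging a rule and lower it again afterward (a sketch; see the udevadm man page):

sudo udevadm control --log-priority=debug
# reproduce the event, inspect the log
sudo udevadm control --log-priority=err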
udev
Rules #
A udev
rule can match any
property the kernel adds to the event itself or any information that the
kernel exports to sysfs
. The rule can also request
additional information from external programs. Every event is matched
against all provided rules. All rules are located in the
/etc/udev/rules.d
directory.
Every line in the rules file contains at least one key value pair. There
are two kinds of keys, match and assignment keys. If all match keys match
their values, the rule is applied and the assignment keys are assigned
the specified value. A matching rule may specify the name of the device
node, add symbolic links pointing to the node or run a specified program
as part of the event handling. If no matching rule is found, the default
device node name is used to create the device node. Detailed information
about the rule syntax and the provided keys to match or import data are
described in the udev
man page.
The following example rules provide a basic introduction to
udev
rule syntax. The example
rules are all taken from the
udev
default rule set that is
located under
/etc/udev/rules.d/50-udev-default.rules
.
udev
Rules #
# console
KERNEL=="console", MODE="0600", OPTIONS="last_rule"
# serial devices
KERNEL=="ttyUSB*", ATTRS{product}=="[Pp]alm*Handheld*", SYMLINK+="pilot"
# printer
SUBSYSTEM=="usb", KERNEL=="lp*", NAME="usb/%k", SYMLINK+="usb%k", GROUP="lp"
# kernel firmware loader
SUBSYSTEM=="firmware", ACTION=="add", RUN+="firmware.sh"
The console
rule consists of three keys: one
match key (KERNEL
) and two assign keys
(MODE
, OPTIONS
). The
KERNEL
match rule searches the device list for any
items of the type console
. Only exact matches are
valid and trigger this rule to be executed. The MODE
key assigns special permissions to the device node, in this case, read
and write permissions to the owner of this device only. The
OPTIONS
key makes this rule the last rule to be
applied to any device of this type. Any later rule matching this
particular device type does not have any effect.
The serial devices
rule is not available in
50-udev-default.rules
anymore, but it is still worth
considering. It consists of two match keys (KERNEL
and
ATTRS
) and one assign key
(SYMLINK
). The KERNEL
key searches
for all devices of the ttyUSB
type. Using the
*
wild card, this key matches several of these
devices. The second match key, ATTRS
, checks whether
the product
attribute file in
sysfs
for any ttyUSB
device
contains a certain string. The assign key (SYMLINK
)
triggers the addition of a symbolic link to this device under
/dev/pilot
. The operator used in this key
(+=
) tells
udev
to additionally perform
this action, even if previous or later rules add other symbolic links. As
this rule contains two match keys, it is only applied if both conditions
are met.
The printer
rule deals with USB printers and
contains two match keys which must both apply to get the entire rule
applied (SUBSYSTEM
and KERNEL
).
Three assign keys deal with the naming for this device type
(NAME
), the creation of symbolic device links
(SYMLINK
) and the group membership for this device
type (GROUP
). Using the *
wild card
in the KERNEL
key makes it match several
lp
printer devices. Substitutions are used in both,
the NAME
and the SYMLINK
keys to
extend these strings by the internal device name. For example, the
symbolic link to the first lp
USB printer would read
/dev/usblp0
.
The kernel firmware loader
rule makes
udev
load additional firmware by
an external helper script during runtime. The
SUBSYSTEM
match key searches for the
firmware
subsystem. The ACTION
key
checks whether any device belonging to the firmware
subsystem has been added. The RUN+=
key triggers the
execution of the firmware.sh
script to locate the
firmware that is to be loaded.
Some general characteristics are common to all rules:
Each rule consists of one or more key value pairs separated by a comma.
A key's operation is determined by the operator.
udev
rules support several
different operators.
Each given value must be enclosed by quotation marks.
Each line of the rules file represents one rule. If a rule is longer
than one line, use \
to join the different lines as
you would do in shell syntax.
udev
rules support a
shell-style pattern that matches the *
,
?
, and []
patterns.
udev
rules support
substitutions.
Operators in udev
Rules #
When creating keys you can choose from several different operators, depending on the type of key you want to create. Match keys will normally be used to find a value that either matches or explicitly mismatches the search value. Match keys contain either of the following operators:
==
Compare for equality. If the key contains a search pattern, all results matching this pattern are valid.
!=
Compare for non-equality. If the key contains a search pattern, all results matching this pattern are valid.
Any of the following operators can be used with assign keys:
=
Assign a value to a key. If the key previously consisted of a list of values, the key resets and only the single value is assigned.
+=
Add a value to a key that contains a list of entries.
:=
Assign a final value. Disallow any later change by later rules.
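The following hypothetical rule shows the operators side by side; the vendor ID and names are invented for the example:

# == matches, = assigns, += appends, := assigns and locks the value
KERNEL=="ttyUSB*", ATTRS{idVendor}=="abcd", ENV{MY_CLASS}="serial", SYMLINK+="my_serial%n", MODE:="0660"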
Substitutions in udev
Rules #
udev
rules support the use of
placeholders and substitutions. Use them in a similar fashion as you
would do in any other scripts. The following substitutions can be used
with udev
rules:
%r
, $root
The device directory, /dev
by default.
%p
, $devpath
The value of DEVPATH
.
%k
, $kernel
The value of KERNEL
or the internal device name.
%n
, $number
The device number.
%N
, $tempnode
The temporary name of the device file.
%M
, $major
The major number of the device.
%m
, $minor
The minor number of the device.
%s{attribute}
,
$attr{attribute}
The value of a sysfs
attribute (specified by
attribute).
%E{variable}
,
$env{variable}
The value of an environment variable (specified by variable).
%c
, $result
The output of PROGRAM
.
%%
The %
character.
$$
The $
character.
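As a hypothetical illustration, the following rules combine several of these substitutions; the link names are invented:

# link block devices under a name built from major and minor number
KERNEL=="sd*", SYMLINK+="block/%M:%m"
# name printer links after the internal kernel device name
KERNEL=="lp*", SYMLINK+="printers/%k"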
udev
Match Keys #
Match keys describe conditions that must be met before a
udev
rule can be applied. The
following match keys are available:
ACTION
The name of the event action, for example, add
or
remove
when adding or removing a device.
DEVPATH
The device path of the event device, for example,
DEVPATH=/bus/pci/drivers/ipw3945
to search for all
events related to the ipw3945 driver.
KERNEL
The internal (kernel) name of the event device.
SUBSYSTEM
The subsystem of the event device, for example,
SUBSYSTEM=usb
for all events related to USB
devices.
ATTR{filename}
sysfs
attributes of the
event device. To match a string contained in the
vendor
attribute file name, you could use
ATTR{vendor}=="On[sS]tream"
, for example.
KERNELS
Let udev
search the device
path upwards for a matching device name.
SUBSYSTEMS
Let udev
search the device
path upwards for a matching device subsystem name.
DRIVERS
Let udev
search the device
path upwards for a matching device driver name.
ATTRS{filename}
Let udev
search the device
path upwards for a device with matching
sysfs
attribute values.
ENV{key}
The value of an environment variable, for example,
ENV{ID_BUS}=="ieee1394"
to search for all events
related to the FireWire bus ID.
PROGRAM
Let udev
execute an external
program. To be successful, the program must return with exit code
zero. The program's output, printed to STDOUT, is available to the
RESULT
key.
RESULT
Match the output string of the last PROGRAM
call.
Either include this key in the same rule as the
PROGRAM
key or in a later one.
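A sketch of PROGRAM and RESULT working together; the helper /usr/local/bin/check_disk is invented for the example and is expected to print its verdict to STDOUT and exit with code zero:

# create the link only if the external check prints "ok"
KERNEL=="sd[a-z]", PROGRAM=="/usr/local/bin/check_disk %k", RESULT=="ok", SYMLINK+="checked/%k"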
udev
Assign Keys #
In contrast to the match keys described above, assign keys do not
describe conditions that must be met. They assign values, names and
actions to the device nodes maintained by
udev
.
NAME
The name of the device node to be created. After a rule has set a
node name, all other rules with a NAME
key for
this node are ignored.
SYMLINK
The name of a symbolic link related to the node to be created. Multiple matching rules can add symbolic links to be created with the device node. You can also specify multiple symbolic links for one node in one rule using the space character to separate the symbolic link names.
OWNER, GROUP, MODE
The permissions for the new device node. Values specified here overwrite anything that has been compiled in.
ATTR{key}
Specify a value to be written to a
sysfs
attribute of the
event device. If the ==
operator is used, this key
is also used to match against the value of a
sysfs
attribute.
ENV{key}
Tell udev
to export a
variable to the environment. If the ==
operator is
used, this key is also used to match against an environment variable.
RUN
Tell udev
to add a program
to the list of programs to be executed for this device. Keep in mind
to restrict this to very short tasks to avoid blocking further events
for this device.
LABEL
Add a label where a GOTO
can jump to.
GOTO
Tell udev
to skip a number
of rules and continue with the one that carries the label referenced
by the GOTO
key.
IMPORT{type}
Load variables into the event environment such as the output of an
external program. udev
imports variables of several different types. If no type is
specified, udev
tries to
determine the type itself based on the executable bit of the file
permissions.
program
tells
udev
to execute an
external program and import its output.
file
tells
udev
to import a text
file.
parent
tells
udev
to import the stored
keys from the parent device.
WAIT_FOR_SYSFS
Tells udev
to wait for the
specified sysfs
file to
be created for a certain device. For example,
WAIT_FOR_SYSFS="ioerr_cnt"
informs
udev
to wait until the
ioerr_cnt
file has been created.
OPTIONS
The OPTIONS
key may have several possible values:
last_rule
tells
udev
to ignore all later
rules.
ignore_device
tells
udev
to ignore this event
completely.
ignore_remove
tells
udev
to ignore all later
remove events for the device.
all_partitions
tells
udev
to create device
nodes for all available partitions on a block device.
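LABEL and GOTO, for example, can be combined to skip a block of rules, as in this hypothetical sketch:

# everything that is not a block device jumps over the rules in between
SUBSYSTEM!="block", GOTO="my_block_end"
KERNEL=="sd*", ENV{MY_DISK}="1"
LABEL="my_block_end"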
The dynamic device directory and the
udev
rules infrastructure make
it possible to provide stable names for all disk
devices—regardless of their order of recognition or the
connection used for the device. Every appropriate block device the kernel
creates is examined by tools with special knowledge about certain buses,
drive types or file systems. Along with the dynamic kernel-provided
device node name, udev
maintains
classes of persistent symbolic links pointing to the device:
/dev/disk
|-- by-id
|   |-- scsi-SATA_HTS726060M9AT00_MRH453M4HWHG7B -> ../../sda
|   |-- scsi-SATA_HTS726060M9AT00_MRH453M4HWHG7B-part1 -> ../../sda1
|   |-- scsi-SATA_HTS726060M9AT00_MRH453M4HWHG7B-part6 -> ../../sda6
|   |-- scsi-SATA_HTS726060M9AT00_MRH453M4HWHG7B-part7 -> ../../sda7
|   |-- usb-Generic_STORAGE_DEVICE_02773 -> ../../sdd
|   `-- usb-Generic_STORAGE_DEVICE_02773-part1 -> ../../sdd1
|-- by-label
|   |-- Photos -> ../../sdd1
|   |-- SUSE10 -> ../../sda7
|   `-- devel -> ../../sda6
|-- by-path
|   |-- pci-0000:00:1f.2-scsi-0:0:0:0 -> ../../sda
|   |-- pci-0000:00:1f.2-scsi-0:0:0:0-part1 -> ../../sda1
|   |-- pci-0000:00:1f.2-scsi-0:0:0:0-part6 -> ../../sda6
|   |-- pci-0000:00:1f.2-scsi-0:0:0:0-part7 -> ../../sda7
|   |-- pci-0000:00:1f.2-scsi-1:0:0:0 -> ../../sr0
|   |-- usb-02773:0:0:2 -> ../../sdd
|   `-- usb-02773:0:0:2-part1 -> ../../sdd1
`-- by-uuid
    |-- 159a47a4-e6e6-40be-a757-a629991479ae -> ../../sda7
    |-- 3e999973-00c9-4917-9442-b7633bd95b9e -> ../../sda6
    `-- 4210-8F8C -> ../../sdd1
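These persistent names can be used wherever a device path is expected. For example, the USB partition from the listing above could be mounted via its label instead of the volatile sdd1 name (the mount point is hypothetical):

mount /dev/disk/by-label/Photos /mnt/photos

The same stable paths are useful in /etc/fstab, where they keep working even if the kernel enumerates the disks in a different order at the next boot.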
udev #
/sys/*
Virtual file system provided by the Linux kernel, exporting all
currently known devices. This information is used by
udev
to create device nodes
in /dev
/dev/*
Dynamically created device nodes and static content created with
systemd-tmpfiles; for more information, see the
systemd-tmpfiles(8)
man page.
The following files and directories contain the crucial elements of the
udev
infrastructure:
/etc/udev/udev.conf
Main udev
configuration file.
/etc/udev/rules.d/*
udev
event matching rules.
/usr/lib/tmpfiles.d/
and
/etc/tmpfiles.d/
Responsible for static /dev
content.
/usr/lib/udev/*
Helper programs called from
udev
rules.
For more information about the
udev
infrastructure, refer to
the following man pages:
udev
General information about
udev
, keys, rules and other
important configuration issues.
udevadm
udevadm
can be used to control the runtime behavior
of udev
, request kernel
events, manage the event queue and provide simple debugging
mechanisms.
udevd
Information about the udev
event managing daemon.
Configuring a network client requires detailed knowledge about services provided over the network (such as printing or LDAP, for example). To make it easier to configure such services on a network client, the “service location protocol” (SLP) was developed. SLP makes the availability and configuration data of selected services known to all clients in the local network. Applications that support SLP can use this information to be configured automatically.
The NTP (network time protocol) mechanism is a protocol for synchronizing the system time over the network. First, a machine can obtain the time from a server that is a reliable time source. Second, a machine can itself act as a time source for other computers in the network. The goal is twofold—maintaining the absolute time and synchronizing the system time of all machines within a network.
DNS (domain name system) is needed to resolve the domain names and host
names into IP addresses. In this way, the IP address 192.168.2.100 is
assigned to the host name jupiter
, for example.
Before setting up your own name server, read the general information
about DNS in Section 13.3, “Name Resolution”. The following
configuration examples refer to BIND, the default DNS server.
The purpose of the Dynamic Host Configuration Protocol (DHCP) is to assign network settings centrally (from a server) rather than configuring them locally on every workstation. A host configured to use DHCP does not have control over its own static address. It is enabled to configure itself completely and automatically according to directions from the server. If you use the NetworkManager on the client side, you do not need to configure the client at all. This is useful if you have changing environments and only one interface active at a time. Never use NetworkManager on a machine that runs a DHCP server.
Using Samba, a Unix machine can be configured as a file and print server for Mac OS X, Windows, and OS/2 machines. Samba has developed into a fully-fledged and rather complex product. Configure Samba with YaST, or by editing the configuration file manually.
Distributing and sharing file systems over a network is a common task in corporate environments. The well-proven network file system (NFS) works with NIS, the yellow pages protocol. For a more secure protocol that works with LDAP and Kerberos, check NFSv4. Combined with pNFS, you can eliminate performance bottlenecks.
NFS with NIS makes a network transparent to the user. With NFS, it is possible to distribute arbitrary file systems over the network. With an appropriate setup, users always find themselves in the same environment regardless of the terminal they currently use.
autofs
is a program that automatically mounts
specified directories on an on-demand basis. It is based on a kernel
module for high efficiency, and can manage both local directories and
network shares. These automatic mount points are mounted only when they
are accessed, and unmounted after a certain period of inactivity. This
on-demand behavior saves bandwidth and results in better performance than
static mounts managed by /etc/fstab
. While
autofs
is a control script,
automount
is the command (daemon) that does the actual
auto-mounting.
According to the survey from http://www.netcraft.com/, the Apache HTTP Server (Apache) is the world's most widely-used Web server. Developed by the Apache Software Foundation (http://www.apache.org/), it is available for most operating systems. openSUSE® Leap includes Apache version 2.4. In this chapter, learn how to install, configure and set up a Web server; how to use SSL, CGI, and additional modules; and how to troubleshoot Apache.
Using the YaST FTP Server
module, you can configure your machine to function as an FTP (File Transfer Protocol) server. Anonymous and/or authenticated users can connect to your machine and download files using the FTP protocol. Depending on the configuration, they can also upload files to the FTP server. YaST uses vsftpd (Very Secure FTP Daemon).
Squid is a widely-used proxy cache for Linux and Unix platforms. This means that it stores requested Internet objects, such as data on a Web or FTP server, on a machine that is closer to the requesting workstation than the server. It may be set up in multiple hierarchies to assure optimal response times and low bandwidth usage, even in modes that are transparent for the end user. Additional software like squidGuard may be used to filter Web contents.
openSUSE® Leap supports installation using installation sources provided with SLP and contains many system services with integrated support for SLP. You can use SLP to provide networked clients with central functions, such as an installation server, file server, or print server on your system. Services that offer SLP support include cupsd, login, ntp, openldap2, postfix, rpasswd, rsyncd, saned, sshd (via fish), vnc, and ypserv.
All packages necessary to use SLP services on a network client are
installed by default. However, if you want to provide
services via SLP, check that the openslp-server
package is installed.
slptool
#
slptool
is a command line tool to query and register
SLP services. The query functions are useful for diagnostic purposes. The
most important slptool
subcommands are listed below.
slptool
--help
lists all
available options and functions.
List all service types available on the network.
tux >
slptool findsrvtypes
service:install.suse:nfs
service:install.suse:ftp
service:install.suse:http
service:install.suse:smb
service:ssh
service:fish
service:YaST.installation.suse:vnc
service:smtp
service:domain
service:management-software.IBM:hardware-management-console
service:rsync
service:ntp
service:ypserv
List all servers providing service type
tux >
slptool findsrvs service:ntp
service:ntp://ntp.example.com:123,57810
service:ntp://ntp2.example.com:123,57810
List attributes for service type on host
tux >
slptool findattrs service:ntp://ntp.example.com
(owner=tux),(email=tux@example.com)
Registers service type on host with an optional list of attributes
slptool register service:ntp://ntp.example.com:57810 \
"(owner=tux),(email=tux@example.com)"
De-registers service type on host
slptool deregister service:ntp://ntp.example.com
For more information run slptool --help
.
To provide SLP services, the SLP daemon
(slpd
) must be running. Like most
system services in openSUSE Leap,
slpd
is controlled by means of a
separate start script. After the installation, the daemon is inactive by
default. To activate it for the current session, run sudo
systemctl start slpd
. If
slpd
should be activated on
system start-up, run sudo systemctl enable
slpd
.
Many applications in openSUSE Leap have integrated SLP support via the
libslp
library. If a service has not been compiled with
SLP support, use one of the following methods to make it available via SLP:
/etc/slp.reg.d
Create a separate registration file for each new service. The following example registers a scanner service:
## Register a saned service on this system
## en means english language
## 65535 disables the timeout, so the service registration does
## not need refreshes
service:scanner.sane://$HOSTNAME:6566,en,65535
watch-port-tcp=6566
description=SANE scanner daemon
The most important line in this file is the service
URL, which begins with service:
. This
contains the service type (scanner.sane
) and the
address under which the service is available on the server.
$HOSTNAME is automatically replaced with
the full host name. The name of the TCP port on which the relevant
service can be found follows, separated by a colon. Then enter the
language in which the service should appear and the duration of
registration in seconds. These should be separated from the service
URL by commas. Set the value for the duration of registration between
0
and 65535
.
0
prevents registration. 65535
removes all restrictions.
The registration file also contains the two variables
watch-port-tcp
and
description
.
watch-port-tcp
links the SLP service
announcement to whether the relevant service is active by having
slpd
check the status of the
service. The second variable contains a more precise description of
the service that is displayed in suitable browsers.
Some services brokered by YaST, such as an installation server or YOU server, perform this registration automatically when you activate SLP in the module dialogs. YaST then creates registration files for these services.
/etc/slp.reg
The only difference between this method and the procedure with
/etc/slp.reg.d
is that all services are grouped
within a central file.
slptool
If a service needs to be registered dynamically without the need of
configuration files, use the slptool command line utility. The same
utility can also be used to de-register an existing service offering
without restarting slpd
. See
Section 17.1, “The SLP Front-End slptool
” for details.
Announcing the installation data via SLP within your network makes the network installation much easier, because the installation data, such as the IP address of the server or the path to the installation media, are automatically acquired via SLP query.
RFC 2608 generally deals with the definition of SLP. RFC 2609 deals with the syntax of the service URLs used in greater detail and RFC 2610 deals with DHCP via SLP.
The home page of the OpenSLP project.
/usr/share/doc/packages/openslp
This directory contains the documentation for SLP coming with the
openslp-server
package, including a
README.SUSE
containing the openSUSE Leap
details, the RFCs, and two introductory HTML documents. Programmers
who want to use the SLP functions will find more information in the
Programmers Guide that is included in the
openslp-devel
package that
is provided with the SUSE Software Development Kit.
Maintaining an exact system time is important in many situations. The built-in hardware clock does often not meet the requirements of applications such as databases or clusters. Manual correction of the system time would lead to severe problems because, for example, a backward leap can cause malfunction of critical applications. Within a network, it is usually necessary to synchronize the system time of all machines, but manual time adjustment is a bad approach. NTP provides a mechanism to solve these problems. The NTP service continuously adjusts the system time with the help of reliable time servers in the network. It further enables the management of local reference clocks, such as radio-controlled clocks.
To enable time synchronization by means of active directory, follow the instructions found at Book “Security Guide”, Chapter 6 “Active Directory Support”, Section 6.3 “Configuring a Linux Client for Active Directory”, Joining an AD Domain.
The NTP daemon (ntpd
) coming with
the ntp
package is preset to use the local
computer clock as a time reference. Using the hardware clock, however,
only serves as a fallback for cases where no time source of better
precision is available. YaST simplifies the configuration of an NTP
client.
The YaST NTP client configuration allows you to select the start mode of ntpd
and the servers to query on the General Settings tab.
Select Only Manually if you want to start the ntpd
daemon manually.
Select Now and on Boot to start ntpd
automatically when the
system is booted. This setting is strongly recommended.
The servers and other time sources for the client to query are listed in the lower part of the General Settings tab. Modify this list as needed with Add, Edit, and Delete. Display Log provides the possibility to view the log files of your client.
Click Add to add a new source of time information. In the following dialog, select the type of source with which the time synchronization should be made. The following options are available:
In the pull-down Select list (see Figure 18.1, “YaST: NTP Server”), determine whether to set up
time synchronization using a time server from your local network
or an Internet-based time
server that takes care of your time zone. For a local time server, click
Lookup to start an SLP query for available time
servers in your network. Select the most suitable time server from
the list of search results and exit the dialog with
OK. For a public time server, select your country
(time zone) and a suitable server from the list, then exit the dialog with OK.
In the main dialog, test the availability of the selected server with
Test. Options allows you to
specify additional options for
ntpd
.
Using the access control options (see Figure 18.2, “Advanced NTP Configuration: Security Settings”),
you can restrict the actions that remote computers can perform with your
NTP daemon. The options correspond to
the restrict
clauses in
/etc/ntp.conf
. For example, nomodify
notrap noquery
disallows the server to modify NTP settings
of your computer and to use the trap facility (a remote event logging
feature) of your NTP daemon. Using these restrictions is recommended
for servers out of your control (for example, on the Internet).
Refer to /usr/share/doc/packages/ntp-doc
(part
of the ntp-doc
package) for detailed
information.
A peer is a machine to which a symmetric relationship is established: it acts both as a time server and as a client. To use a peer in the same network instead of a server, enter the address of the system. The rest of the dialog is identical to the
Server dialog.
To use a radio clock in your system for the time synchronization,
enter the clock type, unit number, device name, and other options in
this dialog. Click Driver Calibration to fine-tune the driver. Detailed
information about the operation of a local radio clock is available in
/usr/share/doc/packages/ntp-doc/refclock.html
.
Time information and queries can also be transmitted by broadcast in the network. In this dialog, enter the address to which such broadcasts should be sent. Do not activate broadcasting unless you have a reliable time source like a radio controlled clock.
If you want your client to receive its information via broadcast, enter the address from which the respective packets should be accepted in this field.
In the security settings (see Figure 18.2, “Advanced NTP Configuration: Security Settings”), determine whether
ntpd
should be started in a
chroot jail. By default, Run NTP Daemon in Chroot Jail is activated. This increases the security in the event of
an attack over ntpd
, as it
prevents the attacker from compromising the entire system.
Restrict NTP Service to Configured Servers Only increases the security of your system by disallowing remote computers to view and modify NTP settings of your computer and to use the trap facility for remote event logging. After being enabled, these restrictions apply to all remote computers, unless you override the access control options for individual computers in the list of time sources in the General Settings tab. For all other remote computers, only querying for local time is allowed.
Enable Open Port in Firewall
if SuSEFirewall2 is active (which it is by default). If you leave the port closed, it is not possible to establish a connection to the time server.
The easiest way to use a time server in the network is to set server
parameters. For example, if a time server called
ntp.example.com
is reachable from the network, add
its name to the file /etc/ntp.conf
by adding the
following line:
server ntp.example.com
To add more time servers, insert additional lines with the keyword
server
. After initializing
ntpd
with the command
systemctl start ntp
, it takes about one hour
until the time is stabilized and the drift file for correcting the local
computer clock is created. With the drift file, the systematic error of
the hardware clock can be computed as soon as the computer is powered on.
The correction is used immediately, resulting in a higher stability of
the system time.
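Whether synchronization works can be verified with ntpq, which is part of the ntp package:

ntpq -p

An asterisk in the first column marks the server currently selected as the synchronization source.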
There are two possible ways to use the NTP mechanism as a client: First, the client can query the time from a known server in regular intervals. With many clients, this approach can cause a high load on the server. Second, the client can wait for NTP broadcasts sent out by broadcast time servers in the network. This approach has the disadvantage that the quality of the server is unknown and a server sending out wrong information can cause severe problems.
If the time is obtained via broadcast, you do not need the server name.
In this case, enter the line broadcastclient
in the
configuration file /etc/ntp.conf
. To use one or more
known time servers exclusively, enter their names in lines starting
with the keyword server
.
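A minimal sketch of the two client variants in /etc/ntp.conf (the server names are examples; iburst optionally speeds up the initial synchronization):

# variant 1: query known time servers
server ntp1.example.com iburst
server ntp2.example.com iburst

# variant 2: listen for broadcasts instead
broadcastclient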
If the system boots without network connection,
ntpd
starts up, but it cannot
resolve DNS names of the time servers set in the configuration file. This
can happen if you use NetworkManager with an encrypted Wi-Fi.
If you want ntpd
to resolve DNS
names at runtime, you must set the dynamic
option. Then, when the network is established some time after booting,
ntpd
looks up the names again and
can reach the time servers to get the time.
Manually edit /etc/ntp.conf
and add
dynamic
to one or more
server
entries:
server ntp.example.com dynamic
Or use YaST and proceed as follows:
In YaST, open the NTP configuration module.
Select the server you want to configure. Then click Edit.
Activate the dynamic
option. Separate it with a space, if there are
already other options entered.
Click OK
to close the edit dialog. Repeat the previous step to change all servers as wanted. Finally click OK
to save the settings.
The software package ntpd
contains drivers for connecting local reference clocks. A list of
supported clocks is available in the
ntp-doc
package in the file
/usr/share/doc/packages/ntp-doc/refclock.html
. Every
driver is associated with a number. In NTP, the actual configuration
takes place by means of pseudo IP addresses. The clocks are entered in
the file /etc/ntp.conf
as though they existed in the
network. For this purpose, they are assigned special IP addresses in the
form
127.127.t.u
.
Here, t stands for the type of the clock and
determines which driver is used, and u stands for the
unit, which determines the interface used.
Normally, the individual drivers have special parameters that describe
configuration details. The file
/usr/share/doc/packages/ntp-doc/drivers/driverNN.html
(where NN is the number of the driver)
provides information about the particular type of clock. For example, the
“type 8” clock (radio clock over serial interface)
requires an additional mode that specifies the clock more precisely. The
Conrad DCF77 receiver module, for example, has mode 5. To use
this clock as a preferred reference, specify the keyword
prefer
. The complete server
line
for a Conrad DCF77 receiver module would be:
server 127.127.8.0 mode 5 prefer
Other clocks follow the same pattern. Following the installation of the
ntp-doc
package, the
documentation for NTP is available in the directory
/usr/share/doc/packages/ntp-doc
. The file
/usr/share/doc/packages/ntp-doc/refclock.html
provides links to the driver pages describing the driver parameters.
The domain name space is divided into regions called zones. For
instance, if you have example.com
, you have
the example
section (or zone) of the
com
domain.
The DNS server is a server that maintains the name and IP information for a domain. You can have a primary DNS server for master zone, a secondary server for slave zone, or a slave server without any zones for caching.
The master zone includes all hosts from your network and a DNS server master zone stores up-to-date records for all the hosts in your domain.
A slave zone is a copy of the master zone. The slave zone DNS server obtains its zone data with zone transfer operations from its master server. The slave zone DNS server responds authoritatively for the zone as long as it has valid (not expired) zone data. If the slave cannot obtain a new copy of the zone data, it stops responding for the zone.
Forwarders are DNS servers to which your DNS server should send
queries it cannot answer. To enable different configuration sources in
one configuration, netconfig
is used (see also
man 8 netconfig
).
The record is information about name and IP address. Supported records and their syntax are described in BIND documentation. Some special records are:
An NS record tells name servers which machines are in charge of a given domain zone.
The MX (mail exchange) records describe the machines to contact for directing mail across the Internet.
SOA (Start of Authority) record is the first record in a zone file. The SOA record is used when using DNS to synchronize data between multiple computers.
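As a sketch, these record types could appear in a zone file as follows (the names, addresses, and SOA values are examples only):

example.com.  IN  SOA  ns1.example.com. admin.example.com. ( 2024010101 3600 900 1209600 300 )
example.com.  IN  NS   ns1.example.com.
example.com.  IN  MX   10 mail.example.com.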
To install a DNS server, start YaST and select Software › Software Management. Choose View › Patterns and select DHCP and DNS Server. Confirm the installation of the dependent packages to finish the installation process.
Use the YaST DNS module to configure a DNS server for the local network. When starting the module for the first time, a wizard starts, prompting you to make a few decisions concerning administration of the server. Completing this initial setup produces a basic server configuration. Use the expert mode to deal with more advanced configuration tasks, such as setting up ACLs, logging, TSIG keys, and other options.
The wizard consists of three steps or dialogs. At the appropriate places in the dialogs, you can enter the expert configuration mode.
When starting the module for the first time, the Forwarder Settings dialog, shown in Figure 19.1, “DNS Server Installation: Forwarder Settings”, opens. It allows you to set the following options:
The netconfig policy defaults to auto
, but here you can
either set interface names or select from the two special policy
names STATIC
and
STATIC_FALLBACK
.
In Forwarder Policy, specify which service to use.
For more information about all these settings, see man 8
netconfig
.
Forwarders are DNS servers to which your DNS server sends queries it cannot answer itself. Enter their IP address and click
Add.
The handling of DNS zones is described in Section 19.6, “Zone Files”. For a new zone, provide a name for
it. To add a reverse zone, the name must
end in .in-addr.arpa
. Finally, select the zone type
(master, slave, or forward). See
Figure 19.2, “DNS Server Installation: DNS Zones”. Click Edit to configure other settings of an existing zone. To remove
a zone, click Delete.
In the final dialog, you can open the DNS port in the firewall by clicking Open Port in Firewall. Then decide whether to start the DNS server when booting (On or Off). You can also activate LDAP support. See Figure 19.3, “DNS Server Installation: Finish Wizard”.
After starting the module, YaST opens a window displaying several configuration options. Completing it results in a DNS server configuration with the basic functions in place:
Under Start-Up, define whether the DNS server should be started when booting the system or manually. To start the DNS server immediately, click Start DNS Server Now. To stop the DNS server, click Stop DNS Server Now. To save the current settings, select Save Settings and Reload DNS Server Now. You can open the DNS port in the firewall with Open Port in Firewall and modify the firewall settings with Firewall Details.
By selecting LDAP Support Active, the zone files are managed by an LDAP database. Any changes to zone data written to the LDAP database are picked up by the DNS server as soon as it is restarted or prompted to reload its configuration.
If your local DNS server cannot answer a request, it tries to forward
the request to a forwarder, if one is configured. To enable different
configuration sources in one configuration, netconfig
is used (see also man 8 netconfig
).
In this section, set basic server options. From the Option
menu, select the desired item, then specify the value in the corresponding text box. Include the new entry by selecting Add.
To set what the DNS server should log and how, select Logging. Under Log Type, specify where the DNS server should write the log data. Use the system-wide log by selecting System Log or specify a different file by selecting File. In the latter case, additionally specify a name, the maximum file size in megabytes and the number of log file versions to store.
Further options are available under Additional Logging. Enabling Log All DNS Queries causes every query to be logged, in which case the log file could grow extremely large. For this reason, it is not a good idea to enable this option for other than debugging purposes. To log the data traffic during zone updates between DHCP and DNS server, enable Log Zone Updates. To log the data traffic during a zone transfer from master to slave, enable Log Zone Transfers. See Figure 19.4, “DNS Server: Logging”.
, specify an IP address (with or without netmask) under in the following fashion:{ 192.168.1/24; }
The syntax of the configuration file requires that the address ends with a semicolon and is put into curly braces.
The main purpose of TSIGs (transaction signatures) is to secure communications between DHCP and DNS servers. They are described in Section 19.8, “Secure Transactions”.
To generate a TSIG key, enter a distinctive name in the field labeled Key ID and specify the file where the key should be stored (Filename). Confirm your choices with Generate.
To use a previously created key, leave the Key ID field blank and select the file where it is stored under Filename. After that, confirm with Add.
To add a slave zone, select DNS Zones, choose the zone type Slave, write the name of the new zone, and click Add.
In the Zone Editor sub-dialog under Master DNS Server IP, specify the master from which the slave should pull its data. To limit access to the server, select one of the ACLs from the list.
To add a master zone, select DNS Zones, choose the zone type Master, write the name of the new zone, and click Add. When adding a master zone example.com
that points to hosts in a subnet
192.168.1.0/24
, you should also add a reverse zone
for the IP-address range covered. By definition, this should be named
1.168.192.in-addr.arpa
.
To edit a master zone, select DNS Zones, select the master zone from the table, and click Edit. The dialog consists of several pages: Basics (the one opened first), NS Records, MX Records, SOA, and Records.
The basic dialog, shown in Figure 19.5, “DNS Server: Zone Editor (Basics)”, lets you define settings for dynamic DNS and access options for zone transfers to clients and slave name servers. To permit the dynamic updating of zones, select Allow Dynamic Updates as well as the corresponding TSIG key. The key must have been defined before the update action starts. To enable zone transfers, select the corresponding ACLs. ACLs must have been defined already. In the zone transfer settings, select whether to enable zone transfers; use the listed ACLs to define who can download zones.
The NS Records dialog allows you to define alternative name servers for the zones specified. Make sure that your own name server is included in the list. To add a record, enter its name and confirm with Add. See Figure 19.6, “DNS Server: Zone Editor (NS Records)”.
In the MX Records dialog, to add a mail server for the current zone to the existing list, enter the corresponding address and priority value. After doing so, confirm by selecting Add. See Figure 19.7, “DNS Server: Zone Editor (MX Records)”.
The SOA page allows you to create SOA (start of authority) records. For an explanation of the individual options, refer to Example 19.6, “The /var/lib/named/example.com.zone File”. Changing SOA records is not supported for dynamic zones managed via LDAP.
The Records dialog manages name resolution. An A
record maps a host name to an IP address; a PTR record is used in reverse zones and is the
opposite of an A
record, for example:
hostname.example.com. IN A 192.168.0.1
1.0.168.192.in-addr.arpa IN PTR hostname.example.com.
To add a reverse zone, follow this procedure:
Start YaST › DNS Server › DNS Zones.
If you have not added a master forward zone, add it and edit it.
In the Records tab, fill in the corresponding record key and value, then add the record with Add and confirm with OK. If YaST complains about a non-existing record for a name server, add it in the NS Records tab.
Back in the DNS Zones window, add a reverse master zone.
Edit the reverse zone, and in the Records tab, you can see the PTR: Reverse Translation record type. Add the corresponding record key and value, then click Add and confirm with OK.
Add a name server record if needed.
After adding a forward zone, go back to the main menu and select the reverse zone for editing. There, in the Basics tab,
activate the check box Automatically Generate Records From and select your forward zone. That way, all changes to the forward zone are automatically updated in the reverse zone.
On an openSUSE® Leap system, the name server BIND (Berkeley
Internet Name Domain) comes preconfigured so it can be started
right after installation without any problems. If you already have a
functioning Internet connection and have entered
127.0.0.1
as the name server
address for localhost
in
/etc/resolv.conf
, you normally already have a
working name resolution without needing to know the DNS of the provider.
BIND carries out name resolution via the root name server, a notably
slower process. Normally, the DNS of the provider should be entered with
its IP address in the configuration file
/etc/named.conf
under
forwarders
to ensure effective and secure name
resolution. If this works, the name server runs as a pure
caching-only name server. Only when you configure its own zones does it
become a proper DNS server. A simple example is documented
in /usr/share/doc/packages/bind/config
.
Depending on the type of Internet connection or the network connection,
the name server information can automatically be adapted to the current
conditions. To do this, set the
NETCONFIG_DNS_POLICY
variable in the
/etc/sysconfig/network/config
file to
auto
.
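The relevant line in /etc/sysconfig/network/config then reads as follows:

NETCONFIG_DNS_POLICY="auto"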
However, do not set up an official domain until one is assigned to you by the responsible institution. Even if you have your own domain and it is managed by the provider, you are better off not using it, because BIND would otherwise not forward requests for this domain. The Web server at the provider, for example, would not be accessible for this domain.
To start the name server, enter the command systemctl start
named
as
root
. Check with
systemctl status named
whether named (as the
name server process is called) has been started successfully. Test the
name server immediately on the local system with the
host
or dig
programs, which should
return localhost
as the
default server with the address
127.0.0.1
. If this is not the
case, /etc/resolv.conf
probably contains an
incorrect name server entry or the file does not exist. For the
first test, enter
host
127.0.0.1
, which should
always work. If you get an error message, use systemctl status
named
to see whether the server is actually running. If
the name server does not start or behaves unexpectedly, check the output
of journalctl -e
.
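In summary, a compact recap of the commands named above (output omitted):

systemctl start named
systemctl status named
host 127.0.0.1
journalctl -e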
To use the name server of the provider (or one already running on your
network) as the forwarder, enter the corresponding IP address or
addresses in the options
section under
forwarders
. The addresses included in
Example 19.1, “Forwarding Options in named.conf” are examples only. Adjust these entries to
your own setup.
options {
        directory "/var/lib/named";
        forwarders { 10.11.12.13; 10.11.12.14; };
        listen-on { 127.0.0.1; 192.168.1.116; };
        allow-query { 127/8; 192.168/16; };
        notify no;
};
The options entry is followed by zone entries for
localhost and 0.0.127.in-addr.arpa. The type hint
entry under “.” should always be present. The
corresponding files do not need to be modified and should work as they
are. Also make sure that each entry is closed with a “;” and
that the curly braces are in the correct places. After changing the
configuration file /etc/named.conf
or the zone
files, tell BIND to reread them with systemctl reload
named
. Achieve the same by stopping and restarting the
name server with systemctl restart named
. Stop
the server at any time by entering systemctl stop
named
.
All the settings for the BIND name server itself are stored in the
/etc/named.conf
file. However, the zone data for the
domains to handle (consisting of the host names, IP addresses, and so on)
are stored in separate files in the /var/lib/named
directory. The details of this are described later.
/etc/named.conf
is roughly divided into two areas.
One is the options
section for general settings
and the other consists of zone
entries for the
individual domains. A logging
section and
acl
(access control list) entries are optional.
Comment lines begin with a #
sign or
//
. A minimal /etc/named.conf
is
shown in Example 19.2, “A Basic /etc/named.conf”.
options {
        directory "/var/lib/named";
        forwarders { 10.0.0.1; };
        notify no;
};

zone "localhost" in {
        type master;
        file "localhost.zone";
};

zone "0.0.127.in-addr.arpa" in {
        type master;
        file "127.0.0.zone";
};

zone "." in {
        type hint;
        file "root.hint";
};
directory "filename";
Specifies the directory in which BIND can find the files containing
the zone data. Usually, this is /var/lib/named
.
forwarders { ip-address; };
Specifies the name servers (mostly of the provider) to which DNS
requests should be forwarded if they cannot be resolved directly.
Replace ip-address with an IP address like
192.168.1.116
.
forward first;
Causes DNS requests to be forwarded before an attempt is made to
resolve them via the root name servers. Instead of
forward first
, forward
only
can be written to have all requests forwarded and
none sent to the root name servers. This makes sense for firewall
configurations.
listen-on port 53 { 127.0.0.1; ip-address; };
Tells BIND on which network interfaces and port to accept client
queries. port 53
does not need to be specified
explicitly, because 53
is the default port. Enter
127.0.0.1
to permit requests from the local host.
If you omit this entry entirely, all interfaces are used by default.
listen-on-v6 port 53 { any; };
Tells BIND on which port it should listen for IPv6 client requests.
The only alternative to any
is
none
. As far as IPv6 is concerned, the server only
accepts wild card addresses.
query-source address * port 53;
This entry is necessary if a firewall is blocking outgoing DNS requests. This tells BIND to post requests externally from port 53 and not from any of the high ports above 1024.
query-source-v6 address * port 53;
Tells BIND which port to use for IPv6 queries.
allow-query { 127.0.0.1; net; };
Defines the networks from which clients can post DNS requests.
Replace net with address information like
192.168.2.0/24
. The
/24
at the end is an abbreviated expression
for the netmask (in this case
255.255.255.0
).
allow-transfer { ! *; };
Controls which hosts can request zone transfers. In the example, such
requests are completely denied with ! *
.
Without this entry, zone transfers can be requested from anywhere
without restrictions.
statistics-interval 0;
In the absence of this entry, BIND generates several lines of statistical information per hour in the system's journal. Set it to 0 to suppress these statistics completely or set an interval in minutes.
cleaning-interval 720;
This option defines at which time intervals BIND clears its cache. This triggers an entry in the system's journal each time it occurs. The time specification is in minutes. The default is 60 minutes.
interface-interval 0;
BIND regularly searches the network interfaces for new or nonexistent interfaces. If this value is set to 0, this is not done and BIND only listens at the interfaces detected at start-up. Otherwise, the interval can be defined in minutes. The default is 60 minutes.
notify no;
Setting this option to no
prevents other name servers from being informed
when changes are made to the zone data or when the name server is
restarted.
For a list of available options, read the manual page man 5
named.conf
.
What, how, and where logging takes place can be extensively configured in BIND. Normally, the default settings should be sufficient. Example 19.3, “Entry to Disable Logging”, shows the simplest form of such an entry and completely suppresses any logging.
logging {
        category default { null; };
};
zone "example.com" in { type master; file "example.com.zone"; notify no; };
After zone
, specify the name of the domain to
administer (example.com
)
followed by in
and a block of relevant options
enclosed in curly braces, as shown in Example 19.4, “Zone Entry for example.com”.
To define a slave zone, switch the
type
to slave
and specify a
name server that administers this zone as master
(which, in turn, may be a slave of another master), as shown in
Example 19.5, “Zone Entry for example.net”.
zone "example.net" in { type slave; file "slave/example.net.zone"; masters { 10.0.0.1; }; };
The zone options:
type master;
By specifying master
, tell BIND that the zone is
handled by the local name server. This assumes that a zone file has
been created in the correct format.
type slave;
This zone is transferred from another name server. It must be used
together with masters
.
type hint;
The zone .
of the hint
type is
used to set the root name servers. This zone definition can be left
as is.
file "example.com.zone"; or file "slave/example.net.zone";
This entry specifies the file where zone data for the domain is
located. This file is not required for a slave, because this data is
pulled from another name server. To differentiate master and slave
files, use the directory slave
for the slave
files.
masters { server-ip-address; };
This entry is only needed for slave zones. It specifies from which name server the zone file should be transferred.
allow-update { ! *; };
This option controls external write access, which would allow clients
to make a DNS entry—something not normally desirable for
security reasons. Without this entry, zone updates are not allowed at
all. The above entry achieves the same because ! *
effectively bans any such activity.
Two types of zone files are needed. One assigns IP addresses to host names and the other does the reverse: it supplies a host name for an IP address.
The "."
has an important meaning in the zone files.
If host names are given without a final dot (.
), the
zone is appended. Complete host names specified with a full domain name
must end with a dot (.
) to avoid having the domain
added to it again. A missing or wrongly placed "." is probably the most
frequent cause of name server configuration errors.
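For illustration, a minimal sketch (the address for www is made up for this example; mail corresponds to the host in Example 19.6):

; "www" lacks a final dot, so the zone name is appended,
; yielding www.example.com.
www               IN A  192.168.1.10
; a fully qualified name must end with a dot to prevent the
; zone from being appended again
mail.example.com. IN A  192.168.3.108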
The first case to consider is the zone file
example.com.zone
, responsible for the domain
example.com
, shown in
Example 19.6, “The /var/lib/named/example.com.zone File”.
1.  $TTL 2D
2.  example.com. IN SOA      dns  root.example.com. (
3.               2003072441  ; serial
4.               1D          ; refresh
5.               2H          ; retry
6.               1W          ; expiry
7.               2D )        ; minimum
8.
9.               IN NS       dns
10.              IN MX       10 mail
11.
12. gate         IN A        192.168.5.1
13.              IN A        10.0.0.1
14. dns          IN A        192.168.1.116
15. mail         IN A        192.168.3.108
16. jupiter      IN A        192.168.2.100
17. venus        IN A        192.168.2.101
18. saturn       IN A        192.168.2.102
19. mercury      IN A        192.168.2.103
20. ntp          IN CNAME    dns
21. dns6         IN A6  0    2002:c0a8:174::
$TTL
defines the default time to live that
should apply to all the entries in this file. In this example, entries
are valid for a period of two days (2D
).
This is where the SOA (start of authority) control record begins:
The name of the domain to administer is
example.com
in the first position. This
ends with "."
, because otherwise the zone would
be appended a second time. Alternatively, @
can
be entered here, in which case the zone would be extracted from the
corresponding entry in /etc/named.conf
.
After IN SOA
is the name of the name server
in charge as master for this zone. The name is expanded from
dns
to dns.example.com
, because
it does not end with a "."
.
An e-mail address of the person in charge of this name server
follows. Because the @
sign already has a special
meaning, "."
is entered here instead. For
root@example.com
the entry must read
root.example.com.
. The
"."
must be included at the end to prevent the
zone from being added.
The (
includes all lines up to
)
into the SOA record.
The serial number
is an arbitrary number that
is increased each time this file is changed. It is needed to inform
the secondary name servers (slave servers) of changes. For this, a 10-digit number consisting of the date and a run number, written as YYYYMMDDNN, has become the customary format.
The refresh rate
specifies the time interval
at which the secondary name servers verify the zone serial
number
. In this case, one day.
The retry rate
specifies the time interval at
which a secondary name server, in case of error, attempts to contact
the primary server again. Here, two hours.
The expiration time
specifies the time frame
after which a secondary name server discards the cached data if it has
not regained contact to the primary server. Here, a week.
The last entry in the SOA record specifies the negative
caching TTL
—the time for which results of
unresolved DNS queries from other servers may be cached.
The IN NS
specifies the name server
responsible for this domain. dns
is extended
to dns.example.com
because it does not end with
a "."
. There can be several lines like
this—one for the primary and one for each secondary name
server. If notify
is not set to
no
in /etc/named.conf
, all the
name servers listed here are informed of the changes made to the zone
data.
The MX record specifies the mail server that accepts, processes, and
forwards e-mails for the domain
example.com
. In
this example, this is the host
mail.example.com
. The number
in front of the host name is the preference value. If there are
multiple MX entries, the mail server with the smallest value is taken
first and, if mail delivery to this server fails, an attempt is made
with the next higher value.
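For illustration, a sketch with a hypothetical backup mail server mail2:

        IN MX       10 mail
        IN MX       20 mail2

Mail for the domain is first delivered to mail (preference 10); only if that fails is mail2 (preference 20) tried.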
These are the actual address records where one or more IP addresses
are assigned to host names. The names are listed here without a
"."
because they do not include their domain, so
example.com
is
added to all of them. Two IP addresses are assigned to the host
gate
, as it has two network cards.
Wherever the host address is a traditional one (IPv4), the record is
marked with A
. If the address is an IPv6 address,
the entry is marked with AAAA
.
The IPv6 record has a slightly different syntax than IPv4. Because of the fragmentation possibility, it is necessary to provide information about missing bits before the address. To fill up the IPv6 address with the needed number of “0” digits, add two colons at the correct place in the address.
pluto AAAA 2345:00C1:CA11::1234:5678:9ABC:DEF0
pluto AAAA 2345:00D2:DA11::1234:5678:9ABC:DEF0
The alias ntp
can be used to address
dns
(CNAME
means
canonical name).
The pseudo domain in-addr.arpa
is used for the reverse
lookup of IP addresses into host names. It is appended to the network
part of the address in reverse notation. So
192.168
is resolved into
168.192.in-addr.arpa
. See
Example 19.7, “Reverse Lookup”.
1.  $TTL 2D
2.  168.192.in-addr.arpa.   IN SOA dns.example.com. root.example.com. (
3.                          2003072441  ; serial
4.                          1D          ; refresh
5.                          2H          ; retry
6.                          1W          ; expiry
7.                          2D )        ; minimum
8.
9.                          IN NS       dns.example.com.
10.
11. 1.5                     IN PTR      gate.example.com.
12. 100.3                   IN PTR      www.example.com.
13. 253.2                   IN PTR      cups.example.com.
$TTL defines the standard TTL that applies to all entries here.
The configuration file should activate reverse lookup for the network
192.168
. Given
that the zone is called 168.192.in-addr.arpa
,
it should not be added to the host names. Therefore, all host names
are entered in their complete form—with their domain and with
a "."
at the end. The remaining entries correspond
to those described for the previous
example.com
example.
See the previous example for
example.com
.
Again this line specifies the name server responsible for this zone.
This time, however, the name is entered in its complete form with the
domain and a "."
at the end.
These are the pointer records hinting at the IP addresses on the
respective hosts. Only the last part of the IP address is entered at
the beginning of the line, without the "."
at the
end. Appending the zone to this (without the
.in-addr.arpa
) results in the complete IP
address in reverse order.
Normally, zone transfers between different versions of BIND should be possible without any problems.
The term dynamic update refers to operations by
which entries in the zone files of a master server are added, changed, or
deleted. This mechanism is described in RFC 2136. Dynamic update
is configured individually for each zone entry by adding an optional
allow-update
or
update-policy
rule. Zones to update dynamically
should not be edited by hand.
Transmit the entries to update to the server with the command
nsupdate
. For the exact syntax of this command, check
the manual page for nsupdate (man 8 nsupdate
). For security reasons, any such update should be
performed using TSIG keys as described in Section 19.8, “Secure Transactions”.
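For example, a hypothetical update session could look like this (the host name and address are made up; the key file name matches the example in Section 19.8, “Secure Transactions”):

nsupdate -k Khost1-host2.+157+34265.private <<EOF
server 10.1.2.3
update add newhost.example.com. 86400 A 192.168.2.42
send
EOF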
Secure transactions can be made with the help of transaction signatures (TSIGs) based on shared secret keys (also called TSIG keys). This section describes how to generate and use such keys.
Secure transactions are needed for communication between different servers and for the dynamic update of zone data. Making the access control dependent on keys is much more secure than merely relying on IP addresses.
Generate a TSIG key with the following command (for details, see
man
dnssec-keygen
):
dnssec-keygen -a hmac-md5 -b 128 -n HOST host1-host2
This creates two files with names similar to these:
Khost1-host2.+157+34265.private
Khost1-host2.+157+34265.key
The key itself (a string like
ejIkuCyyGJwwuN3xAteKgg==
) is found in both files. To
use it for transactions, the second file
(Khost1-host2.+157+34265.key
) must be transferred to
the remote host, preferably in a secure way (using scp, for example). On
the remote server, the key must be included in the
/etc/named.conf
file to enable a secure
communication between host1
and
host2
:
key host1-host2 {
        algorithm hmac-md5;
        secret "ejIkuCyyGJwwuN3xAteKgg==";
};
Make sure that the permissions of /etc/named.conf
are properly restricted. The default for this file is
0640
, with the owner being
root
and the group
named
. As an alternative, move
the keys to an extra file with specially limited permissions, which is
then included from /etc/named.conf
. To include an
external file, use:
include "filename"
Replace filename
with an absolute path to your file
with keys.
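For example (the path is hypothetical; choose any file readable only by root and the named group):

include "/etc/named.d/tsig.key";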
To enable the server host1
to use the key for
host2
(which has the address
10.1.2.3
in this example), the server's
/etc/named.conf
must include the following rule:
server 10.1.2.3 {
        keys { host1-host2. ; };
};
Analogous entries must be included in the configuration files of
host2
.
Add TSIG keys for any ACLs (access control lists, not to be confused with file system ACLs) that are defined for IP addresses and address ranges to enable transaction security. The corresponding entry could look like this:
allow-update { key host1-host2. ; };
This topic is discussed in more detail in the BIND
Administrator Reference Manual under
update-policy
.
DNSSEC, or DNS security, is described in RFC 2535. The tools available for DNSSEC are discussed in the BIND Manual.
A zone considered secure must have one or several zone keys associated
with it. These are generated with dnssec-keygen
, as
are the host keys. The DSA encryption algorithm is currently used to
generate these keys. The public keys generated should be included in the
corresponding zone file with an $INCLUDE
rule.
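For example, a zone key pair for example.com could be generated like this (a sketch; the key tag 26160 in the generated file names will differ):

dnssec-keygen -a DSA -b 1024 -n ZONE example.com

The resulting public key file is then referenced from the zone file with a line such as:

$INCLUDE Kexample.com.+003+26160.key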
With the command dnssec-signzone
, you can create sets
of generated keys (keyset-
files), transfer them to
the parent zone in a secure manner, and sign them. This generates the
files to include for each zone in /etc/named.conf
.
For more information, see the BIND Administrator Reference
Manual from the
bind-doc
package, which is
installed under /usr/share/doc/packages/bind/arm
.
Consider additionally consulting the RFCs referenced by the manual and
the manual pages included with BIND.
/usr/share/doc/packages/bind/README.SUSE
contains
up-to-date information about BIND in openSUSE Leap.
The purpose of the Dynamic Host Configuration Protocol (DHCP) is to assign network settings centrally (from a server) rather than configuring them locally on every workstation. A host configured to use DHCP does not have control over its own static address. It is enabled to configure itself completely and automatically according to directions from the server. If you use NetworkManager on the client side, you do not need to configure the client at all. This is useful if you have changing environments and only one interface active at a time. Never use NetworkManager on a machine that runs a DHCP server.
One way to configure a DHCP server is to identify each client using the hardware address of its network card (which should be fixed in most cases), then supply that client with identical settings each time it connects to the server. DHCP can also be configured to assign addresses to each relevant client dynamically from an address pool set up for this purpose. In the latter case, the DHCP server tries to assign the same address to the client each time it receives a request, even over extended periods. This works only if the network does not have more clients than addresses.
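In /etc/dhcpd.conf, the first approach could be expressed like this (a minimal sketch; the host name, MAC address, and IP address are examples):

host jupiter {
        hardware ethernet 00:30:6E:08:EC:80;
        fixed-address 192.168.2.100;
}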
DHCP makes life easier for system administrators. Any changes, even bigger ones, related to addresses and the network configuration in general can be implemented centrally by editing the server's configuration file. This is much more convenient than reconfiguring numerous workstations. It is also much easier to integrate machines, particularly new machines, into the network, because they can be given an IP address from the pool. Retrieving the appropriate network settings from a DHCP server is especially useful in case of laptops regularly used in different networks.
In this chapter, the DHCP server will run in the same subnet as the
workstations, 192.168.2.0/24
with 192.168.2.1
as
gateway. It has the fixed IP address
192.168.2.254
and serves two
address ranges,
192.168.2.10
to
192.168.2.20
and
192.168.2.100
to
192.168.2.200
.
A DHCP server supplies not only the IP address and the netmask, but also the host name, domain name, gateway, and name server addresses for the client to use. In addition to that, DHCP allows several other parameters to be configured in a centralized way, for example, a time server from which clients may poll the current time or even a print server.
To install a DHCP server, start YaST and select Software › Software Management. Choose Filter › Patterns and select DHCP and DNS Server. Confirm the installation of the dependent packages to finish the installation process.
The YaST DHCP module can be set up to store the server configuration locally (on the host that runs the DHCP server) or to have its configuration data managed by an LDAP server. If you want to use LDAP, set up your LDAP environment before configuring the DHCP server.
For more information about LDAP, see Book “Security Guide”, Chapter 5 “LDAP—A Directory Service”.
The YaST DHCP module (yast2-dhcp-server
)
allows you to set up your own DHCP server for the local network. The
module can run in wizard mode or expert configuration mode.
When the module is started for the first time, a wizard starts, prompting you to make a few basic decisions concerning server administration. Completing this initial setup produces a very basic server configuration that should function in its essential aspects. The expert mode can be used to deal with more advanced configuration tasks. Proceed as follows:
Select the interface from the list to which the DHCP server should listen, choose whether to open the firewall for this interface, and proceed. See Figure 20.1, “DHCP Server: Card Selection”.
Use the check box to determine whether your DHCP settings should be automatically stored by an LDAP server. In the text boxes, provide the network specifics for all clients the DHCP server should manage. These specifics are the domain name, address of a time server, addresses of the primary and secondary name server, addresses of a print and a WINS server (for a mixed network with both Windows and Linux clients), gateway address, and lease time. See Figure 20.2, “DHCP Server: Global Settings”.
Configure how dynamic IP addresses should be assigned to clients. To do so, specify an IP range from which the server can assign addresses to DHCP clients. All these addresses must be covered by the same netmask. Also specify the lease time during which a client may keep its IP address without needing to request an extension of the lease. Optionally, specify the maximum lease time—the period during which the server reserves an IP address for a particular client. See Figure 20.3, “DHCP Server: Dynamic DHCP”.
Define how the DHCP server should be started. Specify whether to start the DHCP server automatically when the system is booted or manually when needed (for example, for testing purposes). Then complete the configuration of the server. See Figure 20.4, “DHCP Server: Start-Up”.
Instead of using dynamic DHCP in the way described in the preceding steps, you can also configure the server to assign addresses in quasi-static fashion. Use the text boxes provided in the lower part to specify a list of the clients to manage in this way. Specifically, provide the name and the IP address to give to such a client, the hardware address, and the network type (token ring or Ethernet). Modify the list of clients, which is shown in the upper part, with Add, Edit, and Delete from List. See Figure 20.5, “DHCP Server: Host Management”.
In addition to the configuration method discussed earlier, there is also an expert configuration mode that allows you to change the DHCP server setup in every detail. Start the expert configuration by selecting DHCP Server Expert Configuration in the start-up dialog (see Figure 20.4, “DHCP Server: Start-Up”).
In this first dialog, make the existing configuration editable by selecting Start DHCP Server. An important feature of the behavior of the DHCP server is its ability to run in a chroot environment, or chroot jail, to secure the server host. If the DHCP server should ever be compromised by an outside attack, the attacker will still be behind bars in the chroot jail, which prevents him from touching the rest of the system. The lower part of the dialog displays a tree view with the declarations that have already been defined. Modify these with Add, Edit, and Delete. Selecting Advanced takes you to additional expert dialogs, where you can view the log file of the server, configure TSIG key management, and adjust the configuration of the firewall according to the setup of the DHCP server. See Figure 20.6, “DHCP Server: Chroot Jail and Declarations”. After selecting Add, define the type of declaration to add.
The configuration of the DHCP server is made up of several declarations. This dialog lets you set the declaration types Subnet, Host, Shared Network, Group, Pool of Addresses, and Class. This example shows the selection of a new subnet (see Figure 20.7, “DHCP Server: Selecting a Declaration Type”).
This dialog allows you to specify a new subnet with its IP address and netmask. In the middle part of the dialog, modify the DHCP server start options for the selected subnet using Add, Edit, and Delete. To set up dynamic DNS for the subnet, select Dynamic DNS.
If you chose to configure dynamic DNS in the previous dialog, you can now configure the key management for a secure zone transfer. Confirming the dialog takes you to another dialog in which to configure the interface for dynamic DNS (see Figure 20.10, “DHCP Server: Interface Configuration for Dynamic DNS”).
You can now activate dynamic DNS for the subnet by selecting the corresponding check box. After doing so, use the drop-down box to activate the TSIG keys for forward and reverse zones, making sure that the keys are the same for the DNS and the DHCP server. With the corresponding option, enable the automatic update and adjustment of the global DHCP server settings according to the dynamic DNS environment. Finally, define which forward and reverse zones should be updated per dynamic DNS, specifying the name of the primary name server for each of the two zones. Confirming returns to the subnet configuration dialog (see Figure 20.8, “DHCP Server: Configuring Subnets”) and, from there, to the original expert configuration dialog.
To define the interfaces the DHCP server should listen to and to adjust the firewall configuration, select Advanced › Interface Configuration from the expert configuration dialog. From the list of interfaces displayed, select one or more that should be attended by the DHCP server. If clients in all subnets need to be able to communicate with the server and the server host also runs a firewall, adjust the firewall accordingly. To do so, select the option to open the firewall for the selected interfaces. YaST then adjusts the rules of SuSEFirewall2 to the new conditions (see Figure 20.11, “DHCP Server: Network Interface and Firewall”), after which you can return to the original dialog by confirming.
After completing all configuration steps, close the dialog with OK. The server is now started with its new configuration.
Both the DHCP server and the DHCP clients are available for openSUSE Leap. The
DHCP server available is dhcpd
(published by the Internet Systems Consortium).
On the client side, there is dhcp-client
(also
from ISC) and tools coming with the wicked
package.
By default, the wicked
tools are installed with
the services wickedd-dhcp4
and
wickedd-dhcp6
. Both are
launched automatically on each system boot to watch for a DHCP server.
They do not need a configuration file to do their job and work out of the
box in most standard setups. For more complex situations, use the ISC
dhcp-client
, which is controlled by means of the
configuration files /etc/dhclient.conf
and
/etc/dhclient6.conf
.
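A minimal /etc/dhclient.conf could look like this (a sketch assuming the interface name eth0; the requested options are examples):

interface "eth0" {
        send host-name "jupiter";
        request subnet-mask, broadcast-address, routers, domain-name-servers;
}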
The core of any DHCP system is the dynamic host configuration protocol
daemon. This server leases addresses and watches how
they are used, according to the settings defined in the configuration
file /etc/dhcpd.conf
. By changing the parameters and
values in this file, a system administrator can influence the program's
behavior in numerous ways. Look at the basic sample
/etc/dhcpd.conf
file in
Example 20.1, “The Configuration File /etc/dhcpd.conf”.
default-lease-time 600;         # 10 minutes
max-lease-time 7200;            # 2 hours

option domain-name "example.com";
option domain-name-servers 192.168.1.116;
option broadcast-address 192.168.2.255;
option routers 192.168.2.1;
option subnet-mask 255.255.255.0;

subnet 192.168.2.0 netmask 255.255.255.0 {
        range 192.168.2.10 192.168.2.20;
        range 192.168.2.100 192.168.2.200;
}
This simple configuration file should be sufficient to get the DHCP server to assign IP addresses in the network. Make sure that a semicolon is inserted at the end of each line, because otherwise dhcpd is not started.
The sample file can be divided into three sections. The first one defines
how many seconds an IP address is leased to a requesting client by
default (default-lease-time
) before it should apply
for renewal. This section also includes a statement of the maximum period
for which a machine may keep an IP address assigned by the DHCP server
without applying for renewal (max-lease-time
).
In the second part, some basic network parameters are defined on a global level:
The line option domain-name
defines the default
domain of your network.
With the entry option domain-name-servers
, specify
up to three values for the DNS servers used to resolve IP addresses
into host names and vice versa. Ideally, configure a name server on
your machine or somewhere else in your network before setting up DHCP.
That name server should also define a host name for each dynamic
address and vice versa