Depending on the scope of the installation, none of the virtualization tools may be installed on your system. They will be installed automatically when you configure the hypervisor with the YaST module Virtualization › Install Hypervisor and Tools. In case this module is not available in YaST, install the package yast2-vm.
To install KVM and KVM tools, proceed as follows:
Verify that the yast2-vm package is installed. This package is YaST's configuration tool that simplifies the installation of virtualization hypervisors.
Start YaST and choose Virtualization › Install Hypervisor and Tools.
Select KVM server for a minimal installation of QEMU tools. Select KVM tools if a libvirt-based management stack is also desired. Confirm with Accept.
To enable normal networking for the VM Guest, using a network bridge is recommended. YaST offers to automatically configure a bridge on the VM Host Server. Agree to do so by choosing Yes, otherwise choose No.
After the setup has been finished, you can start setting up VM Guests. Rebooting the VM Host Server is not required.
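To verify the result, you can, for example, check that the KVM kernel module is loaded and, if you installed the libvirt-based management stack, that libvirt answers:
tux > lsmod | grep kvm
tux > sudo virsh list --all
The first command should list kvm together with kvm_intel or kvm_amd, depending on your CPU; the second should print an (initially empty) table of VM Guests.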
To install Xen and Xen tools, proceed as follows:
Start YaST and choose Virtualization › Install Hypervisor and Tools.
Select Xen server for a minimal installation of Xen tools. Select Xen tools if a libvirt-based management stack is also desired. Confirm with Accept.
To enable normal networking for the VM Guest, using a network bridge is recommended. YaST offers to automatically configure a bridge on the VM Host Server. Agree to do so by choosing Yes, otherwise choose No.
After the setup has been finished, you need to reboot the machine with the Xen kernel.
If everything works as expected, change the default boot kernel with YaST and make the Xen-enabled kernel the default. For more information about changing the default kernel, see Book “Reference”, Chapter 12 “The Boot Loader GRUB 2”, Section 12.3 “Configuring the Boot Loader with YaST”.
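To confirm that the machine actually booted into the Xen hypervisor, you can, for example, query it with the xl tool, which only succeeds when running on Xen:
tux > sudo xl info
If you also installed the libvirt-based management stack, sudo virsh --connect xen:///system list --all should likewise print an (initially empty) table of VM Guests.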
To install containers, proceed as follows:
Start YaST and choose Virtualization › Install Hypervisor and Tools.
Select libvirt lxc daemon and confirm with Accept.
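To verify that the libvirt LXC driver is available, you can, for example, query its version:
tux > sudo virsh -c lxc:/// version
The command prints the library and driver versions when the container driver is functional.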
It is possible to install the virtualization packages using Zypper and patterns. Run the command zypper in -t pattern PATTERN. Available patterns are:
kvm_server: sets up the
KVM VM Host Server with QEMU tools for management
kvm_tools: installs the
libvirt tools for managing and monitoring VM Guests
xen_server: sets up the
Xen VM Host Server with Xen tools for management
xen_tools: installs the
libvirt tools for managing and monitoring VM Guests
There is no pattern for containers; install the libvirt-daemon-lxc package.
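For example, to set up a KVM VM Host Server including the libvirt-based management stack in a single step, you could run:
tux > sudo zypper in -t pattern kvm_server kvm_tools
Replace the patterns with xen_server and xen_tools for a Xen host, or install the libvirt-daemon-lxc package for containers.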
UEFI support is provided by OVMF (Open Virtual Machine Firmware). To enable UEFI boot, first install the qemu-ovmf-x86_64 or qemu-uefi-aarch64 package.
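For example, on an AMD64/Intel 64 host:
tux > sudo zypper in qemu-ovmf-x86_64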
The firmware used by virtual machines is auto-selected. The auto-selection
is based on the *.json files in the
qemu-ovmf-ARCH package. The
libvirt QEMU driver parses those files when loading so it knows the
capabilities of the various types of firmware. Then when the user selects the type
of firmware and any desired features (for example, support for secure boot),
libvirt will be able to find a firmware that satisfies the user's
requirements.
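To see which firmware descriptors are available on your host, you can, for example, list the JSON files shipped with the package:
tux > rpm -ql qemu-ovmf-x86_64 | grep '\.json$'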
For example, to specify EFI with secure boot, use the following configuration:
<os firmware='efi'>
  <loader secure='yes'/>
</os>
The qemu-ovmf-ARCH packages contain the following files:
root # rpm -ql qemu-ovmf-x86_64
[...]
/usr/share/qemu/ovmf-x86_64-ms-code.bin
/usr/share/qemu/ovmf-x86_64-ms-vars.bin
/usr/share/qemu/ovmf-x86_64-ms.bin
/usr/share/qemu/ovmf-x86_64-suse-code.bin
/usr/share/qemu/ovmf-x86_64-suse-vars.bin
/usr/share/qemu/ovmf-x86_64-suse.bin
[...]
The *-code.bin files are the UEFI firmware files.
The *-vars.bin files are corresponding variable
store images that can be used as a template for a per-VM non-volatile
store. libvirt copies the specified vars
template to a per-VM path under
/var/lib/libvirt/qemu/nvram/ when first
creating the VM. Files without code or
vars in the name can be used as a single UEFI
image. They are not as useful since no UEFI variables persist
across power cycles of the VM.
The *-ms*.bin files contain Microsoft keys as
found on real hardware. Therefore, they are configured as the default in
libvirt. Likewise, the *-suse*.bin files
contain preinstalled SUSE keys. There is also a set
of files with no preinstalled keys.
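When starting QEMU manually rather than through libvirt, the split code/vars layout maps to two pflash drives. The following is a minimal sketch; my-vars.bin and disk.qcow2 are hypothetical file names, and the variable store template is copied first so the VM gets its own writable copy:
tux > cp /usr/share/qemu/ovmf-x86_64-ms-vars.bin my-vars.bin
tux > qemu-system-x86_64 -enable-kvm -m 2048 \
 -drive if=pflash,format=raw,readonly=on,file=/usr/share/qemu/ovmf-x86_64-ms-code.bin \
 -drive if=pflash,format=raw,file=my-vars.bin \
 -drive file=disk.qcow2,format=qcow2
The first pflash drive must be the read-only firmware code, the second the writable variable store.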
For details, see Using UEFI and Secure Boot and http://www.linux-kvm.org/downloads/lersek/ovmf-whitepaper-c770f8c.txt.
KVM's nested virtualization is still a technology preview. It is provided for testing purposes and is not supported.
Nested guests are KVM guests run in a KVM guest. When describing nested guests, we will use the following virtualization layers:
L0: a bare metal host running KVM.
L1: a virtual machine running on L0. Because it can run another KVM, it is called a guest hypervisor.
L2: a virtual machine running on L1. It is called a nested guest.
Nested virtualization has many advantages. You can benefit from it in the following scenarios:
Manage your own virtual machines directly with your hypervisor of choice in cloud environments.
Enable the live migration of hypervisors and their guest virtual machines as a single entity.
Use it for software development and testing.
To enable nesting temporarily, remove the module and reload it with the
nested KVM module parameter:
For Intel CPUs, run:
tux > sudo modprobe -r kvm_intel && sudo modprobe kvm_intel nested=1
For AMD CPUs, run:
tux > sudo modprobe -r kvm_amd && sudo modprobe kvm_amd nested=1
To enable nesting permanently, enable the nested KVM
module parameter in the /etc/modprobe.d/kvm_*.conf file,
depending on your CPU:
For Intel CPUs, edit /etc/modprobe.d/kvm_intel.conf
and add the following line:
options kvm_intel nested=Y
For AMD CPUs, edit /etc/modprobe.d/kvm_amd.conf and
add the following line:
options kvm_amd nested=Y
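Regardless of the method, you can verify whether nesting is enabled by reading the module parameter back, for example for Intel CPUs:
tux > cat /sys/module/kvm_intel/parameters/nested
The command prints Y (or 1 on older kernels) when nesting is enabled. For AMD CPUs, query kvm_amd instead.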
When your L0 host is capable of nesting, you will be able to start an L1 guest in one of the following ways:
Use the -cpu host QEMU command line option.
Add the vmx (for Intel CPUs) or the
svm (for AMD CPUs) CPU feature to the
-cpu QEMU command line option, which enables
virtualization for the virtual CPU.
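As a minimal sketch of the first variant, the following starts an L1 guest from a hypothetical disk image l1-guest.qcow2 that contains the guest hypervisor installation:
tux > qemu-system-x86_64 -enable-kvm -m 4096 -cpu host \
 -drive file=l1-guest.qcow2,format=qcow2
Inside the started L1 guest, the presence of /dev/kvm indicates that it can itself act as a KVM host.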